Test Report: KVM_Linux_crio 17977

afe619924c08f9e8f87f8c65127b26c11ec5ac1e:2024-04-29:34242

Test fail (11/207)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-971694 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-971694 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.939643832s)

-- stdout --
	* [addons-971694] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-971694" primary control-plane node in "addons-971694" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	* Verifying registry addon...
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-971694 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-971694 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
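
The gcp-auth note in the stdout above says that credential mounting can be skipped for a specific pod by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod follows, assuming the label value "true" and using placeholder pod/container names (the busybox image matches one listed in the addon output, but this manifest is illustrative and not part of this run):

# Hypothetical pod that opts out of gcp-auth credential injection via the
# gcp-auth-skip-secret label mentioned in the minikube output above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: docker.io/busybox:stable
    command: ["sleep", "3600"]
EOF
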
** stderr ** 
	I0428 23:08:16.976563   21498 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:08:16.976700   21498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:08:16.976709   21498 out.go:304] Setting ErrFile to fd 2...
	I0428 23:08:16.976713   21498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:08:16.976943   21498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:08:16.977561   21498 out.go:298] Setting JSON to false
	I0428 23:08:16.978445   21498 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3041,"bootTime":1714342656,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:08:16.978504   21498 start.go:139] virtualization: kvm guest
	I0428 23:08:16.980627   21498 out.go:177] * [addons-971694] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:08:16.982012   21498 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 23:08:16.983476   21498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:08:16.982074   21498 notify.go:220] Checking for updates...
	I0428 23:08:16.985015   21498 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:08:16.986384   21498 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:08:16.987638   21498 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0428 23:08:16.988808   21498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 23:08:16.990099   21498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:08:17.021181   21498 out.go:177] * Using the kvm2 driver based on user configuration
	I0428 23:08:17.022421   21498 start.go:297] selected driver: kvm2
	I0428 23:08:17.022432   21498 start.go:901] validating driver "kvm2" against <nil>
	I0428 23:08:17.022442   21498 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 23:08:17.023130   21498 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:08:17.023204   21498 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0428 23:08:17.037611   21498 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0428 23:08:17.037649   21498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 23:08:17.037873   21498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 23:08:17.037936   21498 cni.go:84] Creating CNI manager for ""
	I0428 23:08:17.037953   21498 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0428 23:08:17.037968   21498 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 23:08:17.038067   21498 start.go:340] cluster config:
	{Name:addons-971694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-971694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:08:17.038169   21498 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:08:17.040001   21498 out.go:177] * Starting "addons-971694" primary control-plane node in "addons-971694" cluster
	I0428 23:08:17.041356   21498 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:08:17.041382   21498 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:08:17.041401   21498 cache.go:56] Caching tarball of preloaded images
	I0428 23:08:17.041476   21498 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:08:17.041499   21498 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:08:17.041804   21498 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/config.json ...
	I0428 23:08:17.041826   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/config.json: {Name:mke1f27fd604139ddac9b28ed75c38d6bfd93fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:17.041961   21498 start.go:360] acquireMachinesLock for addons-971694: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:08:17.042017   21498 start.go:364] duration metric: took 40.141µs to acquireMachinesLock for "addons-971694"
	I0428 23:08:17.042056   21498 start.go:93] Provisioning new machine with config: &{Name:addons-971694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-971694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:08:17.042122   21498 start.go:125] createHost starting for "" (driver="kvm2")
	I0428 23:08:17.043733   21498 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0428 23:08:17.043860   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:08:17.043904   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:08:17.057391   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0428 23:08:17.057801   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:08:17.058339   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:08:17.058359   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:08:17.058704   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:08:17.058893   21498 main.go:141] libmachine: (addons-971694) Calling .GetMachineName
	I0428 23:08:17.059028   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:17.059175   21498 start.go:159] libmachine.API.Create for "addons-971694" (driver="kvm2")
	I0428 23:08:17.059199   21498 client.go:168] LocalClient.Create starting
	I0428 23:08:17.059233   21498 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:08:17.137592   21498 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:08:17.330122   21498 main.go:141] libmachine: Running pre-create checks...
	I0428 23:08:17.330142   21498 main.go:141] libmachine: (addons-971694) Calling .PreCreateCheck
	I0428 23:08:17.330625   21498 main.go:141] libmachine: (addons-971694) Calling .GetConfigRaw
	I0428 23:08:17.331027   21498 main.go:141] libmachine: Creating machine...
	I0428 23:08:17.331042   21498 main.go:141] libmachine: (addons-971694) Calling .Create
	I0428 23:08:17.331155   21498 main.go:141] libmachine: (addons-971694) Creating KVM machine...
	I0428 23:08:17.332282   21498 main.go:141] libmachine: (addons-971694) DBG | found existing default KVM network
	I0428 23:08:17.332971   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:17.332843   21520 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0428 23:08:17.333022   21498 main.go:141] libmachine: (addons-971694) DBG | created network xml: 
	I0428 23:08:17.333042   21498 main.go:141] libmachine: (addons-971694) DBG | <network>
	I0428 23:08:17.333057   21498 main.go:141] libmachine: (addons-971694) DBG |   <name>mk-addons-971694</name>
	I0428 23:08:17.333072   21498 main.go:141] libmachine: (addons-971694) DBG |   <dns enable='no'/>
	I0428 23:08:17.333079   21498 main.go:141] libmachine: (addons-971694) DBG |   
	I0428 23:08:17.333088   21498 main.go:141] libmachine: (addons-971694) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0428 23:08:17.333096   21498 main.go:141] libmachine: (addons-971694) DBG |     <dhcp>
	I0428 23:08:17.333102   21498 main.go:141] libmachine: (addons-971694) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0428 23:08:17.333109   21498 main.go:141] libmachine: (addons-971694) DBG |     </dhcp>
	I0428 23:08:17.333114   21498 main.go:141] libmachine: (addons-971694) DBG |   </ip>
	I0428 23:08:17.333122   21498 main.go:141] libmachine: (addons-971694) DBG |   
	I0428 23:08:17.333126   21498 main.go:141] libmachine: (addons-971694) DBG | </network>
	I0428 23:08:17.333134   21498 main.go:141] libmachine: (addons-971694) DBG | 
	I0428 23:08:17.338386   21498 main.go:141] libmachine: (addons-971694) DBG | trying to create private KVM network mk-addons-971694 192.168.39.0/24...
	I0428 23:08:17.402470   21498 main.go:141] libmachine: (addons-971694) DBG | private KVM network mk-addons-971694 192.168.39.0/24 created
	I0428 23:08:17.402496   21498 main.go:141] libmachine: (addons-971694) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694 ...
	I0428 23:08:17.402516   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:17.402463   21520 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:08:17.402527   21498 main.go:141] libmachine: (addons-971694) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:08:17.402638   21498 main.go:141] libmachine: (addons-971694) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:08:17.646623   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:17.646503   21520 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa...
	I0428 23:08:17.750443   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:17.750302   21520 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/addons-971694.rawdisk...
	I0428 23:08:17.750481   21498 main.go:141] libmachine: (addons-971694) DBG | Writing magic tar header
	I0428 23:08:17.750495   21498 main.go:141] libmachine: (addons-971694) DBG | Writing SSH key tar header
	I0428 23:08:17.750513   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:17.750417   21520 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694 ...
	I0428 23:08:17.750530   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694
	I0428 23:08:17.750595   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:08:17.750619   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:08:17.750633   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694 (perms=drwx------)
	I0428 23:08:17.750649   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:08:17.750660   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:08:17.750671   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:08:17.750678   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:08:17.750685   21498 main.go:141] libmachine: (addons-971694) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:08:17.750693   21498 main.go:141] libmachine: (addons-971694) Creating domain...
	I0428 23:08:17.750708   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:08:17.750723   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:08:17.750732   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:08:17.750746   21498 main.go:141] libmachine: (addons-971694) DBG | Checking permissions on dir: /home
	I0428 23:08:17.750758   21498 main.go:141] libmachine: (addons-971694) DBG | Skipping /home - not owner
	I0428 23:08:17.751681   21498 main.go:141] libmachine: (addons-971694) define libvirt domain using xml: 
	I0428 23:08:17.751710   21498 main.go:141] libmachine: (addons-971694) <domain type='kvm'>
	I0428 23:08:17.751725   21498 main.go:141] libmachine: (addons-971694)   <name>addons-971694</name>
	I0428 23:08:17.751740   21498 main.go:141] libmachine: (addons-971694)   <memory unit='MiB'>4000</memory>
	I0428 23:08:17.751749   21498 main.go:141] libmachine: (addons-971694)   <vcpu>2</vcpu>
	I0428 23:08:17.751758   21498 main.go:141] libmachine: (addons-971694)   <features>
	I0428 23:08:17.751766   21498 main.go:141] libmachine: (addons-971694)     <acpi/>
	I0428 23:08:17.751780   21498 main.go:141] libmachine: (addons-971694)     <apic/>
	I0428 23:08:17.751785   21498 main.go:141] libmachine: (addons-971694)     <pae/>
	I0428 23:08:17.751789   21498 main.go:141] libmachine: (addons-971694)     
	I0428 23:08:17.751797   21498 main.go:141] libmachine: (addons-971694)   </features>
	I0428 23:08:17.751802   21498 main.go:141] libmachine: (addons-971694)   <cpu mode='host-passthrough'>
	I0428 23:08:17.751806   21498 main.go:141] libmachine: (addons-971694)   
	I0428 23:08:17.751848   21498 main.go:141] libmachine: (addons-971694)   </cpu>
	I0428 23:08:17.751857   21498 main.go:141] libmachine: (addons-971694)   <os>
	I0428 23:08:17.751862   21498 main.go:141] libmachine: (addons-971694)     <type>hvm</type>
	I0428 23:08:17.751867   21498 main.go:141] libmachine: (addons-971694)     <boot dev='cdrom'/>
	I0428 23:08:17.751873   21498 main.go:141] libmachine: (addons-971694)     <boot dev='hd'/>
	I0428 23:08:17.751900   21498 main.go:141] libmachine: (addons-971694)     <bootmenu enable='no'/>
	I0428 23:08:17.751920   21498 main.go:141] libmachine: (addons-971694)   </os>
	I0428 23:08:17.751931   21498 main.go:141] libmachine: (addons-971694)   <devices>
	I0428 23:08:17.751943   21498 main.go:141] libmachine: (addons-971694)     <disk type='file' device='cdrom'>
	I0428 23:08:17.751958   21498 main.go:141] libmachine: (addons-971694)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/boot2docker.iso'/>
	I0428 23:08:17.751969   21498 main.go:141] libmachine: (addons-971694)       <target dev='hdc' bus='scsi'/>
	I0428 23:08:17.751979   21498 main.go:141] libmachine: (addons-971694)       <readonly/>
	I0428 23:08:17.751987   21498 main.go:141] libmachine: (addons-971694)     </disk>
	I0428 23:08:17.752001   21498 main.go:141] libmachine: (addons-971694)     <disk type='file' device='disk'>
	I0428 23:08:17.752017   21498 main.go:141] libmachine: (addons-971694)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:08:17.752029   21498 main.go:141] libmachine: (addons-971694)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/addons-971694.rawdisk'/>
	I0428 23:08:17.752036   21498 main.go:141] libmachine: (addons-971694)       <target dev='hda' bus='virtio'/>
	I0428 23:08:17.752042   21498 main.go:141] libmachine: (addons-971694)     </disk>
	I0428 23:08:17.752049   21498 main.go:141] libmachine: (addons-971694)     <interface type='network'>
	I0428 23:08:17.752056   21498 main.go:141] libmachine: (addons-971694)       <source network='mk-addons-971694'/>
	I0428 23:08:17.752063   21498 main.go:141] libmachine: (addons-971694)       <model type='virtio'/>
	I0428 23:08:17.752068   21498 main.go:141] libmachine: (addons-971694)     </interface>
	I0428 23:08:17.752075   21498 main.go:141] libmachine: (addons-971694)     <interface type='network'>
	I0428 23:08:17.752092   21498 main.go:141] libmachine: (addons-971694)       <source network='default'/>
	I0428 23:08:17.752102   21498 main.go:141] libmachine: (addons-971694)       <model type='virtio'/>
	I0428 23:08:17.752110   21498 main.go:141] libmachine: (addons-971694)     </interface>
	I0428 23:08:17.752117   21498 main.go:141] libmachine: (addons-971694)     <serial type='pty'>
	I0428 23:08:17.752123   21498 main.go:141] libmachine: (addons-971694)       <target port='0'/>
	I0428 23:08:17.752129   21498 main.go:141] libmachine: (addons-971694)     </serial>
	I0428 23:08:17.752135   21498 main.go:141] libmachine: (addons-971694)     <console type='pty'>
	I0428 23:08:17.752144   21498 main.go:141] libmachine: (addons-971694)       <target type='serial' port='0'/>
	I0428 23:08:17.752152   21498 main.go:141] libmachine: (addons-971694)     </console>
	I0428 23:08:17.752156   21498 main.go:141] libmachine: (addons-971694)     <rng model='virtio'>
	I0428 23:08:17.752165   21498 main.go:141] libmachine: (addons-971694)       <backend model='random'>/dev/random</backend>
	I0428 23:08:17.752172   21498 main.go:141] libmachine: (addons-971694)     </rng>
	I0428 23:08:17.752177   21498 main.go:141] libmachine: (addons-971694)     
	I0428 23:08:17.752183   21498 main.go:141] libmachine: (addons-971694)     
	I0428 23:08:17.752188   21498 main.go:141] libmachine: (addons-971694)   </devices>
	I0428 23:08:17.752194   21498 main.go:141] libmachine: (addons-971694) </domain>
	I0428 23:08:17.752200   21498 main.go:141] libmachine: (addons-971694) 
	I0428 23:08:17.757491   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:04:09:2c in network default
	I0428 23:08:17.757997   21498 main.go:141] libmachine: (addons-971694) Ensuring networks are active...
	I0428 23:08:17.758013   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:17.758537   21498 main.go:141] libmachine: (addons-971694) Ensuring network default is active
	I0428 23:08:17.758786   21498 main.go:141] libmachine: (addons-971694) Ensuring network mk-addons-971694 is active
	I0428 23:08:17.759683   21498 main.go:141] libmachine: (addons-971694) Getting domain xml...
	I0428 23:08:17.760216   21498 main.go:141] libmachine: (addons-971694) Creating domain...
	I0428 23:08:19.108179   21498 main.go:141] libmachine: (addons-971694) Waiting to get IP...
	I0428 23:08:19.109008   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:19.109390   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:19.109437   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:19.109366   21520 retry.go:31] will retry after 296.607439ms: waiting for machine to come up
	I0428 23:08:19.407935   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:19.408323   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:19.408355   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:19.408290   21520 retry.go:31] will retry after 289.016818ms: waiting for machine to come up
	I0428 23:08:19.698715   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:19.699021   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:19.699069   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:19.699000   21520 retry.go:31] will retry after 373.291407ms: waiting for machine to come up
	I0428 23:08:20.073473   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:20.073905   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:20.073939   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:20.073858   21520 retry.go:31] will retry after 384.322825ms: waiting for machine to come up
	I0428 23:08:20.460092   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:20.460527   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:20.460555   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:20.460478   21520 retry.go:31] will retry after 569.0562ms: waiting for machine to come up
	I0428 23:08:21.031105   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:21.031446   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:21.031482   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:21.031394   21520 retry.go:31] will retry after 596.381126ms: waiting for machine to come up
	I0428 23:08:21.629143   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:21.629543   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:21.629574   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:21.629503   21520 retry.go:31] will retry after 1.014220514s: waiting for machine to come up
	I0428 23:08:22.645139   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:22.645547   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:22.645575   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:22.645502   21520 retry.go:31] will retry after 1.161580533s: waiting for machine to come up
	I0428 23:08:23.808778   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:23.809207   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:23.809233   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:23.809153   21520 retry.go:31] will retry after 1.150388059s: waiting for machine to come up
	I0428 23:08:24.961492   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:24.961875   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:24.961906   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:24.961835   21520 retry.go:31] will retry after 1.900313091s: waiting for machine to come up
	I0428 23:08:26.863298   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:26.863743   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:26.863769   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:26.863698   21520 retry.go:31] will retry after 2.33130277s: waiting for machine to come up
	I0428 23:08:29.198011   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:29.198420   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:29.198443   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:29.198383   21520 retry.go:31] will retry after 3.190930952s: waiting for machine to come up
	I0428 23:08:32.391272   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:32.391686   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:32.391717   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:32.391646   21520 retry.go:31] will retry after 3.170538093s: waiting for machine to come up
	I0428 23:08:35.567192   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:35.567716   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find current IP address of domain addons-971694 in network mk-addons-971694
	I0428 23:08:35.567740   21498 main.go:141] libmachine: (addons-971694) DBG | I0428 23:08:35.567680   21520 retry.go:31] will retry after 5.405183393s: waiting for machine to come up
	I0428 23:08:40.977330   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:40.977710   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has current primary IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:40.977723   21498 main.go:141] libmachine: (addons-971694) Found IP for machine: 192.168.39.130
	I0428 23:08:40.977733   21498 main.go:141] libmachine: (addons-971694) Reserving static IP address...
	I0428 23:08:40.978065   21498 main.go:141] libmachine: (addons-971694) DBG | unable to find host DHCP lease matching {name: "addons-971694", mac: "52:54:00:36:e2:9e", ip: "192.168.39.130"} in network mk-addons-971694
	I0428 23:08:41.049954   21498 main.go:141] libmachine: (addons-971694) Reserved static IP address: 192.168.39.130
	I0428 23:08:41.049984   21498 main.go:141] libmachine: (addons-971694) Waiting for SSH to be available...
	I0428 23:08:41.049993   21498 main.go:141] libmachine: (addons-971694) DBG | Getting to WaitForSSH function...
	I0428 23:08:41.053057   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.053410   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.053440   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.053590   21498 main.go:141] libmachine: (addons-971694) DBG | Using SSH client type: external
	I0428 23:08:41.053628   21498 main.go:141] libmachine: (addons-971694) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa (-rw-------)
	I0428 23:08:41.053666   21498 main.go:141] libmachine: (addons-971694) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:08:41.053693   21498 main.go:141] libmachine: (addons-971694) DBG | About to run SSH command:
	I0428 23:08:41.053704   21498 main.go:141] libmachine: (addons-971694) DBG | exit 0
	I0428 23:08:41.190365   21498 main.go:141] libmachine: (addons-971694) DBG | SSH cmd err, output: <nil>: 
	I0428 23:08:41.190655   21498 main.go:141] libmachine: (addons-971694) KVM machine creation complete!
	I0428 23:08:41.191046   21498 main.go:141] libmachine: (addons-971694) Calling .GetConfigRaw
	I0428 23:08:41.191588   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:41.191786   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:41.191908   21498 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:08:41.191920   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:08:41.193115   21498 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:08:41.193134   21498 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:08:41.193142   21498 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:08:41.193152   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.195240   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.195626   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.195657   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.195702   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:41.195889   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.196023   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.196119   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:41.196264   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:41.196451   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:41.196463   21498 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:08:41.309592   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:08:41.309612   21498 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:08:41.309620   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.312247   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.312641   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.312670   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.312800   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:41.313019   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.313194   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.313316   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:41.313446   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:41.313634   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:41.313648   21498 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:08:41.427303   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:08:41.427380   21498 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:08:41.427395   21498 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:08:41.427410   21498 main.go:141] libmachine: (addons-971694) Calling .GetMachineName
	I0428 23:08:41.427674   21498 buildroot.go:166] provisioning hostname "addons-971694"
	I0428 23:08:41.427703   21498 main.go:141] libmachine: (addons-971694) Calling .GetMachineName
	I0428 23:08:41.427898   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.430307   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.430654   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.430676   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.430819   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:41.430978   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.431124   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.431243   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:41.431380   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:41.431554   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:41.431565   21498 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-971694 && echo "addons-971694" | sudo tee /etc/hostname
	I0428 23:08:41.557524   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971694
	
	I0428 23:08:41.557551   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.560062   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.560396   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.560442   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.560646   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:41.560831   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.561000   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.561129   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:41.561276   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:41.561471   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:41.561489   21498 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-971694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-971694/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-971694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:08:41.680226   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:08:41.680269   21498 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:08:41.680301   21498 buildroot.go:174] setting up certificates
	I0428 23:08:41.680311   21498 provision.go:84] configureAuth start
	I0428 23:08:41.680327   21498 main.go:141] libmachine: (addons-971694) Calling .GetMachineName
	I0428 23:08:41.680581   21498 main.go:141] libmachine: (addons-971694) Calling .GetIP
	I0428 23:08:41.682925   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.683264   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.683323   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.683469   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.685268   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.685552   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.685573   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.685698   21498 provision.go:143] copyHostCerts
	I0428 23:08:41.685765   21498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:08:41.685908   21498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:08:41.685991   21498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:08:41.686121   21498 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.addons-971694 san=[127.0.0.1 192.168.39.130 addons-971694 localhost minikube]
	I0428 23:08:41.958013   21498 provision.go:177] copyRemoteCerts
	I0428 23:08:41.958116   21498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:08:41.958139   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:41.960776   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.961101   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:41.961130   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:41.961263   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:41.961457   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:41.961595   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:41.961730   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:08:42.049088   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:08:42.075955   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 23:08:42.101902   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 23:08:42.128548   21498 provision.go:87] duration metric: took 448.22678ms to configureAuth
	I0428 23:08:42.128578   21498 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:08:42.128728   21498 config.go:182] Loaded profile config "addons-971694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:08:42.128793   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:42.131214   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.131530   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.131560   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.131688   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:42.131869   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.132004   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.132131   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:42.132266   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:42.132477   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:42.132509   21498 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:08:42.430581   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:08:42.430610   21498 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:08:42.430619   21498 main.go:141] libmachine: (addons-971694) Calling .GetURL
	I0428 23:08:42.431957   21498 main.go:141] libmachine: (addons-971694) DBG | Using libvirt version 6000000
	I0428 23:08:42.434159   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.434518   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.434545   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.434715   21498 main.go:141] libmachine: Docker is up and running!
	I0428 23:08:42.434730   21498 main.go:141] libmachine: Reticulating splines...
	I0428 23:08:42.434737   21498 client.go:171] duration metric: took 25.375529995s to LocalClient.Create
	I0428 23:08:42.434760   21498 start.go:167] duration metric: took 25.375585054s to libmachine.API.Create "addons-971694"
	I0428 23:08:42.434773   21498 start.go:293] postStartSetup for "addons-971694" (driver="kvm2")
	I0428 23:08:42.434785   21498 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:08:42.434805   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:42.435041   21498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:08:42.435058   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:42.436868   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.437188   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.437220   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.437342   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:42.437523   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.437683   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:42.437803   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:08:42.524927   21498 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:08:42.529585   21498 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:08:42.529607   21498 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:08:42.529677   21498 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:08:42.529702   21498 start.go:296] duration metric: took 94.923154ms for postStartSetup
	I0428 23:08:42.529730   21498 main.go:141] libmachine: (addons-971694) Calling .GetConfigRaw
	I0428 23:08:42.530344   21498 main.go:141] libmachine: (addons-971694) Calling .GetIP
	I0428 23:08:42.533325   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.533663   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.533685   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.533941   21498 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/config.json ...
	I0428 23:08:42.534168   21498 start.go:128] duration metric: took 25.492033555s to createHost
	I0428 23:08:42.534190   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:42.536182   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.536472   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.536504   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.536602   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:42.536731   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.536819   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.536947   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:42.537179   21498 main.go:141] libmachine: Using SSH client type: native
	I0428 23:08:42.537339   21498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0428 23:08:42.537354   21498 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0428 23:08:42.647392   21498 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714345722.617957899
	
	I0428 23:08:42.647423   21498 fix.go:216] guest clock: 1714345722.617957899
	I0428 23:08:42.647431   21498 fix.go:229] Guest: 2024-04-28 23:08:42.617957899 +0000 UTC Remote: 2024-04-28 23:08:42.5341808 +0000 UTC m=+25.604341703 (delta=83.777099ms)
	I0428 23:08:42.647451   21498 fix.go:200] guest clock delta is within tolerance: 83.777099ms
	I0428 23:08:42.647456   21498 start.go:83] releasing machines lock for "addons-971694", held for 25.605412391s
	I0428 23:08:42.647473   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:42.647736   21498 main.go:141] libmachine: (addons-971694) Calling .GetIP
	I0428 23:08:42.650247   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.650552   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.650583   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.650683   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:42.651173   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:42.651342   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:08:42.651433   21498 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:08:42.651472   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:42.651554   21498 ssh_runner.go:195] Run: cat /version.json
	I0428 23:08:42.651578   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:08:42.654080   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.654244   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.654463   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.654490   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.654614   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:42.654626   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:42.654638   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:42.654810   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:08:42.654845   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.654952   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:08:42.654963   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:42.655112   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:08:42.655129   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:08:42.655262   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:08:42.744477   21498 ssh_runner.go:195] Run: systemctl --version
	I0428 23:08:42.770091   21498 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:08:42.938684   21498 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:08:42.945217   21498 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:08:42.945284   21498 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:08:42.964445   21498 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:08:42.964470   21498 start.go:494] detecting cgroup driver to use...
	I0428 23:08:42.964537   21498 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:08:42.983771   21498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:08:42.999341   21498 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:08:42.999398   21498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:08:43.014464   21498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:08:43.029731   21498 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:08:43.158805   21498 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:08:43.310416   21498 docker.go:233] disabling docker service ...
	I0428 23:08:43.310474   21498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:08:43.327376   21498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:08:43.340498   21498 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:08:43.482174   21498 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:08:43.610477   21498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:08:43.625951   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:08:43.645779   21498 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:08:43.645847   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.656544   21498 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:08:43.656614   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.667446   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.677846   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.688395   21498 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:08:43.699426   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.710582   21498 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:08:43.730801   21498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
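	For reference, the sed commands above adjust CRI-O's drop-in configuration. Reconstructed from those commands alone (the file on the guest may contain additional keys), the affected values in /etc/crio/crio.conf.d/02-crio.conf should end up roughly as sketched below; the grep is one way to confirm they landed:

# Expected effect of the logged sed edits on /etc/crio/crio.conf.d/02-crio.conf (fragment only):
#   pause_image = "registry.k8s.io/pause:3.9"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",
#   ]
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf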
	I0428 23:08:43.742410   21498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:08:43.752491   21498 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:08:43.752552   21498 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:08:43.765744   21498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
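	The sysctl probe above fails only because the br_netfilter module is not loaded yet; once modprobe succeeds, the bridge netfilter keys appear under /proc/sys, and the forwarding bit set by the echo can be read back. A minimal verification sketch:

sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
cat /proc/sys/net/ipv4/ip_forward           # expected to print 1 after the echo above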
	I0428 23:08:43.776695   21498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:08:43.888008   21498 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0428 23:08:44.031968   21498 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:08:44.032046   21498 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:08:44.037590   21498 start.go:562] Will wait 60s for crictl version
	I0428 23:08:44.037654   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:08:44.041795   21498 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:08:44.081786   21498 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
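	This version probe works because an earlier step in this log wrote /etc/crictl.yaml pointing crictl at the CRI-O socket. A minimal sketch of the same check done by hand, using only paths that already appear in this log:

# /etc/crictl.yaml (content as written earlier in this log):
#   runtime-endpoint: unix:///var/run/crio/crio.sock
sudo /usr/bin/crictl version   # expected: RuntimeName cri-o, RuntimeVersion 1.29.1
sudo /usr/bin/crictl info      # broader runtime status over the same socket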
	I0428 23:08:44.081912   21498 ssh_runner.go:195] Run: crio --version
	I0428 23:08:44.111087   21498 ssh_runner.go:195] Run: crio --version
	I0428 23:08:44.142943   21498 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:08:44.144168   21498 main.go:141] libmachine: (addons-971694) Calling .GetIP
	I0428 23:08:44.146661   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:44.146947   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:08:44.146968   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:08:44.147095   21498 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:08:44.151778   21498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:08:44.165553   21498 kubeadm.go:877] updating cluster {Name:addons-971694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-971694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 23:08:44.165693   21498 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:08:44.165740   21498 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:08:44.202496   21498 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0428 23:08:44.202554   21498 ssh_runner.go:195] Run: which lz4
	I0428 23:08:44.206959   21498 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0428 23:08:44.211561   21498 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 23:08:44.211584   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0428 23:08:45.731333   21498 crio.go:462] duration metric: took 1.524409656s to copy over tarball
	I0428 23:08:45.731403   21498 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 23:08:48.303013   21498 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.571581629s)
	I0428 23:08:48.303045   21498 crio.go:469] duration metric: took 2.571681573s to extract the tarball
	I0428 23:08:48.303055   21498 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 23:08:48.342083   21498 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:08:48.386346   21498 crio.go:514] all images are preloaded for cri-o runtime.
	I0428 23:08:48.386366   21498 cache_images.go:84] Images are preloaded, skipping loading
	I0428 23:08:48.386372   21498 kubeadm.go:928] updating node { 192.168.39.130 8443 v1.30.0 crio true true} ...
	I0428 23:08:48.386478   21498 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-971694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-971694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:08:48.386548   21498 ssh_runner.go:195] Run: crio config
	I0428 23:08:48.438259   21498 cni.go:84] Creating CNI manager for ""
	I0428 23:08:48.438281   21498 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0428 23:08:48.438290   21498 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 23:08:48.438310   21498 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.130 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-971694 NodeName:addons-971694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 23:08:48.438453   21498 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-971694"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0428 23:08:48.438511   21498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:08:48.449879   21498 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 23:08:48.449936   21498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0428 23:08:48.461415   21498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0428 23:08:48.480329   21498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:08:48.499428   21498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0428 23:08:48.519240   21498 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I0428 23:08:48.523708   21498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:08:48.538253   21498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:08:48.679990   21498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:08:48.700459   21498 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694 for IP: 192.168.39.130
	I0428 23:08:48.700485   21498 certs.go:194] generating shared ca certs ...
	I0428 23:08:48.700500   21498 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:48.700640   21498 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:08:48.875637   21498 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt ...
	I0428 23:08:48.875667   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt: {Name:mk839b8fe24e02c34606db9edb3e7a7b41d28fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:48.875851   21498 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key ...
	I0428 23:08:48.875870   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key: {Name:mkea4c916ad6e3d3d484faa3215506e2d36bc456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:48.875969   21498 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:08:49.032252   21498 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt ...
	I0428 23:08:49.032281   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt: {Name:mkd72e8423bc6299e083afcfd0469f070ea710b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.032496   21498 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key ...
	I0428 23:08:49.032511   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key: {Name:mkd02fa31dbad2aa3a268c124ed3067569bf4339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.032605   21498 certs.go:256] generating profile certs ...
	I0428 23:08:49.032691   21498 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.key
	I0428 23:08:49.032706   21498 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.crt with IP's: []
	I0428 23:08:49.275382   21498 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.crt ...
	I0428 23:08:49.275413   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.crt: {Name:mkfd58add8fd14fd1eaf69671271e66bfaabde29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.275599   21498 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.key ...
	I0428 23:08:49.275615   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/client.key: {Name:mk7f1338e594e5a03a86d6cbeac0487b696676a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.275724   21498 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key.68304783
	I0428 23:08:49.275755   21498 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt.68304783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.130]
	I0428 23:08:49.506926   21498 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt.68304783 ...
	I0428 23:08:49.506955   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt.68304783: {Name:mk84a831543c7b6a9391a9ffe6bc9672036afcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.507154   21498 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key.68304783 ...
	I0428 23:08:49.507177   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key.68304783: {Name:mk5fa55b9da04d8d8af20df80deb112b22809cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.507285   21498 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt.68304783 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt
	I0428 23:08:49.507394   21498 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key.68304783 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key
	I0428 23:08:49.507468   21498 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.key
	I0428 23:08:49.507491   21498 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.crt with IP's: []
	I0428 23:08:49.631135   21498 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.crt ...
	I0428 23:08:49.631163   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.crt: {Name:mkd4093da5cde66c2cffe2cdfb7dafbff13ed404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.631333   21498 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.key ...
	I0428 23:08:49.631350   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.key: {Name:mk7b3f5adc451a17542163a14c96ee48ebe95d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:08:49.631561   21498 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:08:49.631602   21498 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:08:49.631637   21498 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:08:49.631675   21498 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:08:49.632298   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:08:49.663795   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:08:49.694272   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:08:49.724533   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:08:49.751954   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0428 23:08:49.780901   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:08:49.809784   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:08:49.838392   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/addons-971694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0428 23:08:49.865120   21498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:08:49.891954   21498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 23:08:49.911453   21498 ssh_runner.go:195] Run: openssl version
	I0428 23:08:49.917930   21498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:08:49.931470   21498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:08:49.936717   21498 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:08:49.936778   21498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:08:49.943359   21498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
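	The two steps above install the minikube CA into the system trust store. OpenSSL locates CA certificates by subject-hash symlinks, which is where the b5213941.0 name comes from; the equivalent manual sequence, using the paths from this log, would be roughly:

sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash (b5213941 here)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # hash.0 link used for CApath lookups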
	I0428 23:08:49.956735   21498 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:08:49.961565   21498 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:08:49.961617   21498 kubeadm.go:391] StartCluster: {Name:addons-971694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-971694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:08:49.961693   21498 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0428 23:08:49.961757   21498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0428 23:08:50.002126   21498 cri.go:91] found id: ""
	I0428 23:08:50.002198   21498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 23:08:50.014353   21498 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 23:08:50.026933   21498 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 23:08:50.038880   21498 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 23:08:50.038900   21498 kubeadm.go:156] found existing configuration files:
	
	I0428 23:08:50.038953   21498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 23:08:50.050088   21498 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 23:08:50.050166   21498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 23:08:50.061769   21498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 23:08:50.074822   21498 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 23:08:50.074892   21498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 23:08:50.085608   21498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 23:08:50.097946   21498 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 23:08:50.098013   21498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 23:08:50.108145   21498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 23:08:50.118064   21498 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 23:08:50.118132   21498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 23:08:50.128295   21498 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 23:08:50.324623   21498 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 23:09:01.055970   21498 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 23:09:01.056046   21498 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 23:09:01.056148   21498 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 23:09:01.056263   21498 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 23:09:01.056395   21498 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0428 23:09:01.056476   21498 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 23:09:01.058438   21498 out.go:204]   - Generating certificates and keys ...
	I0428 23:09:01.058504   21498 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 23:09:01.058592   21498 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 23:09:01.058700   21498 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 23:09:01.058784   21498 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 23:09:01.058876   21498 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 23:09:01.058951   21498 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 23:09:01.059034   21498 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 23:09:01.059146   21498 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-971694 localhost] and IPs [192.168.39.130 127.0.0.1 ::1]
	I0428 23:09:01.059196   21498 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 23:09:01.059328   21498 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-971694 localhost] and IPs [192.168.39.130 127.0.0.1 ::1]
	I0428 23:09:01.059406   21498 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 23:09:01.059492   21498 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 23:09:01.059550   21498 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 23:09:01.059598   21498 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 23:09:01.059652   21498 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 23:09:01.059711   21498 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 23:09:01.059766   21498 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 23:09:01.059830   21498 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 23:09:01.059902   21498 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 23:09:01.059971   21498 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 23:09:01.060032   21498 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 23:09:01.061722   21498 out.go:204]   - Booting up control plane ...
	I0428 23:09:01.061820   21498 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 23:09:01.061890   21498 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 23:09:01.061947   21498 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 23:09:01.062079   21498 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 23:09:01.062170   21498 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 23:09:01.062229   21498 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 23:09:01.062380   21498 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 23:09:01.062477   21498 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 23:09:01.062536   21498 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001168013s
	I0428 23:09:01.062598   21498 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 23:09:01.062647   21498 kubeadm.go:309] [api-check] The API server is healthy after 5.001898028s
	I0428 23:09:01.062741   21498 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 23:09:01.062859   21498 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 23:09:01.062915   21498 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 23:09:01.063089   21498 kubeadm.go:309] [mark-control-plane] Marking the node addons-971694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 23:09:01.063159   21498 kubeadm.go:309] [bootstrap-token] Using token: xr6z4d.yarjymvs908hmll3
	I0428 23:09:01.065679   21498 out.go:204]   - Configuring RBAC rules ...
	I0428 23:09:01.065789   21498 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 23:09:01.065894   21498 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 23:09:01.066088   21498 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 23:09:01.066246   21498 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 23:09:01.066390   21498 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 23:09:01.066491   21498 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 23:09:01.066649   21498 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 23:09:01.066811   21498 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 23:09:01.066864   21498 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 23:09:01.066871   21498 kubeadm.go:309] 
	I0428 23:09:01.066934   21498 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 23:09:01.066948   21498 kubeadm.go:309] 
	I0428 23:09:01.067015   21498 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 23:09:01.067022   21498 kubeadm.go:309] 
	I0428 23:09:01.067074   21498 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 23:09:01.067129   21498 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 23:09:01.067184   21498 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 23:09:01.067191   21498 kubeadm.go:309] 
	I0428 23:09:01.067275   21498 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 23:09:01.067286   21498 kubeadm.go:309] 
	I0428 23:09:01.067342   21498 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 23:09:01.067354   21498 kubeadm.go:309] 
	I0428 23:09:01.067396   21498 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 23:09:01.067461   21498 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 23:09:01.067520   21498 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 23:09:01.067533   21498 kubeadm.go:309] 
	I0428 23:09:01.067611   21498 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 23:09:01.067690   21498 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 23:09:01.067716   21498 kubeadm.go:309] 
	I0428 23:09:01.067817   21498 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xr6z4d.yarjymvs908hmll3 \
	I0428 23:09:01.067961   21498 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 \
	I0428 23:09:01.068000   21498 kubeadm.go:309] 	--control-plane 
	I0428 23:09:01.068009   21498 kubeadm.go:309] 
	I0428 23:09:01.068094   21498 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 23:09:01.068102   21498 kubeadm.go:309] 
	I0428 23:09:01.068185   21498 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xr6z4d.yarjymvs908hmll3 \
	I0428 23:09:01.068337   21498 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 
	I0428 23:09:01.068356   21498 cni.go:84] Creating CNI manager for ""
	I0428 23:09:01.068365   21498 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0428 23:09:01.070929   21498 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0428 23:09:01.072198   21498 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0428 23:09:01.085215   21498 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0428 23:09:01.105183   21498 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 23:09:01.105318   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:01.105336   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-971694 minikube.k8s.io/updated_at=2024_04_28T23_09_01_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=addons-971694 minikube.k8s.io/primary=true
	I0428 23:09:01.141482   21498 ops.go:34] apiserver oom_adj: -16
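	The minikube-rbac step above grants cluster-admin to the default service account in the kube-system namespace. As an illustration only (the test issues the imperative kubectl create shown in the log, not a manifest), a declarative equivalent of that binding would be:

cat <<'EOF' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: minikube-rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF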
	I0428 23:09:01.271070   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:01.771860   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:02.271672   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:02.771313   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:03.271951   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:03.771514   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:04.271117   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:04.771094   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:05.271221   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:05.771845   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:06.271395   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:06.771705   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:07.271581   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:07.771917   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:08.271276   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:08.771461   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:09.272106   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:09.771607   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:10.271932   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:10.771954   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:11.271554   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:11.771910   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:12.271195   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:12.771778   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:13.271144   21498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:09:13.355556   21498 kubeadm.go:1107] duration metric: took 12.250308213s to wait for elevateKubeSystemPrivileges
	W0428 23:09:13.355594   21498 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 23:09:13.355604   21498 kubeadm.go:393] duration metric: took 23.393991559s to StartCluster
	I0428 23:09:13.355624   21498 settings.go:142] acquiring lock: {Name:mk4e6965347be51f4cd501030baea6b9cd2dbc9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:09:13.355770   21498 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:09:13.356324   21498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/kubeconfig: {Name:mk5412a370a0ddec304ff7697d6d137221e96742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:09:13.356541   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 23:09:13.356555   21498 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0428 23:09:13.356631   21498 addons.go:69] Setting yakd=true in profile "addons-971694"
	I0428 23:09:13.356536   21498 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:09:13.356666   21498 addons.go:234] Setting addon yakd=true in "addons-971694"
	I0428 23:09:13.359886   21498 out.go:177] * Verifying Kubernetes components...
	I0428 23:09:13.356678   21498 addons.go:69] Setting registry=true in profile "addons-971694"
	I0428 23:09:13.359939   21498 addons.go:234] Setting addon registry=true in "addons-971694"
	I0428 23:09:13.359976   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.356677   21498 addons.go:69] Setting ingress-dns=true in profile "addons-971694"
	I0428 23:09:13.360081   21498 addons.go:234] Setting addon ingress-dns=true in "addons-971694"
	I0428 23:09:13.356690   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.360146   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.356689   21498 addons.go:69] Setting inspektor-gadget=true in profile "addons-971694"
	I0428 23:09:13.360258   21498 addons.go:234] Setting addon inspektor-gadget=true in "addons-971694"
	I0428 23:09:13.356699   21498 addons.go:69] Setting storage-provisioner=true in profile "addons-971694"
	I0428 23:09:13.356698   21498 addons.go:69] Setting gcp-auth=true in profile "addons-971694"
	I0428 23:09:13.356703   21498 addons.go:69] Setting metrics-server=true in profile "addons-971694"
	I0428 23:09:13.356705   21498 addons.go:69] Setting helm-tiller=true in profile "addons-971694"
	I0428 23:09:13.356709   21498 addons.go:69] Setting ingress=true in profile "addons-971694"
	I0428 23:09:13.356712   21498 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-971694"
	I0428 23:09:13.356715   21498 addons.go:69] Setting cloud-spanner=true in profile "addons-971694"
	I0428 23:09:13.356713   21498 config.go:182] Loaded profile config "addons-971694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:09:13.356721   21498 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-971694"
	I0428 23:09:13.356723   21498 addons.go:69] Setting volumesnapshots=true in profile "addons-971694"
	I0428 23:09:13.356731   21498 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-971694"
	I0428 23:09:13.356737   21498 addons.go:69] Setting default-storageclass=true in profile "addons-971694"
	I0428 23:09:13.361857   21498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:09:13.360324   21498 addons.go:234] Setting addon metrics-server=true in "addons-971694"
	I0428 23:09:13.361963   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.360354   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.360376   21498 addons.go:234] Setting addon ingress=true in "addons-971694"
	I0428 23:09:13.362180   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.360383   21498 addons.go:234] Setting addon storage-provisioner=true in "addons-971694"
	I0428 23:09:13.362265   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.362310   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.360398   21498 addons.go:234] Setting addon helm-tiller=true in "addons-971694"
	I0428 23:09:13.362353   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.362370   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.362384   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.360410   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.360420   21498 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-971694"
	I0428 23:09:13.360426   21498 addons.go:234] Setting addon volumesnapshots=true in "addons-971694"
	I0428 23:09:13.360444   21498 addons.go:234] Setting addon cloud-spanner=true in "addons-971694"
	I0428 23:09:13.360443   21498 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-971694"
	I0428 23:09:13.360449   21498 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-971694"
	I0428 23:09:13.360415   21498 mustload.go:65] Loading cluster: addons-971694
	I0428 23:09:13.360458   21498 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-971694"
	I0428 23:09:13.360555   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.362820   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.362864   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.360619   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.362924   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.363213   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.363249   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.364152   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.364532   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.364574   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.364604   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.364973   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.364990   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.365053   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.365075   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.365132   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.365133   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.365139   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.365162   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.365249   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.365263   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.365285   21498 config.go:182] Loaded profile config "addons-971694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:09:13.365488   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.385937   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39963
	I0428 23:09:13.386313   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0428 23:09:13.386564   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0428 23:09:13.386702   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.386945   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.387206   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0428 23:09:13.387222   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.387236   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.387360   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.387664   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.387975   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0428 23:09:13.388304   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.388367   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.388384   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.388443   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.388469   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.388674   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.388694   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.388773   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.388855   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.388960   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.388984   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.389453   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.389458   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.389501   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.389553   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.389576   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.389777   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.389962   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.390499   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.390543   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.390813   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.390847   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.390962   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.390989   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.391145   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.391186   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.398323   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.398348   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.398371   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.398376   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.418536   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.418577   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.420991   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0428 23:09:13.421441   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0428 23:09:13.421862   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.422489   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.422507   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.422886   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.422965   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.423412   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.423429   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.423492   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0428 23:09:13.423947   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.424648   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0428 23:09:13.424778   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.425220   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.425248   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.425551   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.425559   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.425580   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.425755   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.425975   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.426246   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.426263   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.426763   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.426801   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.426965   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.427491   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.427523   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.430429   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0428 23:09:13.430854   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.430980   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.433428   21498 out.go:177]   - Using image docker.io/registry:2.8.3
	I0428 23:09:13.431394   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.434979   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.436895   21498 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0428 23:09:13.438433   21498 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0428 23:09:13.438452   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0428 23:09:13.438473   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.436199   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0428 23:09:13.437166   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.439041   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.439562   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.439586   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.439965   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.440568   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.441385   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0428 23:09:13.441701   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I0428 23:09:13.441872   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.441963   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.442263   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.442342   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.442357   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.442391   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.442587   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.442727   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.442882   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.443390   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.443522   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.443538   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.443605   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0428 23:09:13.443970   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.443995   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.444009   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.444195   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0428 23:09:13.444385   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.444639   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.444657   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.445087   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.445119   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.445351   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0428 23:09:13.445469   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.445531   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.445583   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.447727   21498 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0428 23:09:13.446497   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.446515   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.446520   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.446602   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.449300   21498 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0428 23:09:13.449436   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.449823   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.450517   21498 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0428 23:09:13.450525   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.452145   21498 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0428 23:09:13.452164   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0428 23:09:13.452182   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.450550   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0428 23:09:13.452241   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.449990   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.452274   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.449871   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0428 23:09:13.450648   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.452738   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.453351   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.453366   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.453387   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.453777   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.453987   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.454932   21498 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-971694"
	I0428 23:09:13.454968   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.455337   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.455369   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.455802   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.456388   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.456416   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.456449   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.456474   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.456689   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.456928   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.457028   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.457048   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.457180   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.457242   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.457602   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.457671   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.457932   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.459874   21498 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0428 23:09:13.458183   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.458335   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.458868   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.459049   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.460465   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0428 23:09:13.461622   21498 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0428 23:09:13.461634   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0428 23:09:13.461650   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.462054   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.462092   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.462871   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.463798   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.464046   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0428 23:09:13.464815   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.465264   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.465286   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.465631   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.465834   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.466394   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.466927   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.466948   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.467121   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.467329   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.467557   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.467758   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.470427   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.470944   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.470961   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.471028   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0428 23:09:13.471939   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.472437   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.472453   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.472517   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.472717   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.472845   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.473263   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.473419   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.474453   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.476949   21498 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0428 23:09:13.474901   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.478794   21498 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0428 23:09:13.478807   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0428 23:09:13.478826   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.480505   21498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0428 23:09:13.482056   21498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 23:09:13.480443   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0428 23:09:13.482190   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.482975   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0428 23:09:13.483009   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.483116   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I0428 23:09:13.485383   21498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 23:09:13.483860   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.485416   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.484104   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.484190   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I0428 23:09:13.484282   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.484333   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.484640   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0428 23:09:13.485348   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.487091   21498 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0428 23:09:13.487112   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0428 23:09:13.487128   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.486176   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.486257   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.487216   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.486257   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.487260   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.486269   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.487293   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.486278   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.486289   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.487454   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.487753   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.487815   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.487955   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.488020   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.488865   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.488884   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.488947   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.489325   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.489522   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.489884   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.489948   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.489966   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.490336   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.490524   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.491033   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.492944   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.492946   21498 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0428 23:09:13.494610   21498 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0428 23:09:13.494627   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0428 23:09:13.492452   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.492476   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.492296   21498 addons.go:234] Setting addon default-storageclass=true in "addons-971694"
	I0428 23:09:13.494770   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:13.493203   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.493411   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.494884   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.493596   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.494642   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.496962   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0428 23:09:13.498849   21498 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0428 23:09:13.495148   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.495371   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.497785   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.498215   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0428 23:09:13.498393   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.499475   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0428 23:09:13.500386   21498 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0428 23:09:13.500427   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.501594   21498 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 23:09:13.501832   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.502848   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0428 23:09:13.502859   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0428 23:09:13.502876   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.502926   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.502947   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.503134   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0428 23:09:13.503150   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.505019   21498 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:09:13.505037   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 23:09:13.503392   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.504107   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.505139   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.504145   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.504182   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.505675   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.505855   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.505869   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.506091   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.506342   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.506585   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.507374   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.507735   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.507753   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.508485   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.508547   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.508601   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.508625   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.508787   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.509037   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.509098   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.509154   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0428 23:09:13.509357   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.509414   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.509468   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.509540   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.511693   21498 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0428 23:09:13.509897   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.510381   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.510567   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.511386   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.512173   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.513202   21498 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0428 23:09:13.513215   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0428 23:09:13.513218   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.513228   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.513202   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.513281   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.513293   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.513312   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.513773   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.513846   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.513897   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.513949   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.514332   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.514387   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.516542   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.516872   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.517076   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0428 23:09:13.517520   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.518433   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.518453   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.518796   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.519086   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.519792   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.523153   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.523171   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.523199   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.523407   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.525461   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0428 23:09:13.523752   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.528569   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0428 23:09:13.526910   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0428 23:09:13.527110   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.527181   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0428 23:09:13.531229   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0428 23:09:13.530163   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.530319   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.530348   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.533992   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0428 23:09:13.532930   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.533064   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.535970   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.537390   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0428 23:09:13.536013   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.536380   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.540014   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0428 23:09:13.541359   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0428 23:09:13.539058   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.539325   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:13.541422   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:13.541574   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.542929   21498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0428 23:09:13.544329   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0428 23:09:13.544350   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0428 23:09:13.544411   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.544675   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.546340   21498 out.go:177]   - Using image docker.io/busybox:stable
	I0428 23:09:13.547971   21498 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0428 23:09:13.549413   21498 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0428 23:09:13.549428   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0428 23:09:13.547214   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.549476   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.549506   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.547829   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.549444   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.549710   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.549889   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.550080   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:13.552217   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.552554   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.552575   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.552798   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.552957   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.553098   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.553196   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	W0428 23:09:13.571306   21498 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33742->192.168.39.130:22: read: connection reset by peer
	I0428 23:09:13.571338   21498 retry.go:31] will retry after 176.118084ms: ssh: handshake failed: read tcp 192.168.39.1:33742->192.168.39.130:22: read: connection reset by peer
	W0428 23:09:13.571401   21498 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33744->192.168.39.130:22: read: connection reset by peer
	I0428 23:09:13.571413   21498 retry.go:31] will retry after 175.13825ms: ssh: handshake failed: read tcp 192.168.39.1:33744->192.168.39.130:22: read: connection reset by peer
	I0428 23:09:13.585983   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0428 23:09:13.586352   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:13.586845   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:13.586863   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:13.587217   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:13.587428   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:13.589193   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:13.589455   21498 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 23:09:13.589475   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 23:09:13.589493   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:13.592458   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.592946   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:13.592980   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:13.593140   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:13.593341   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:13.593501   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:13.593654   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
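Each "scp memory --> ..." line streams an addon manifest that is embedded in the test binary straight to a path inside the VM over the SSH connection just established; no file is staged on the host side. A rough sketch of that idea with golang.org/x/crypto/ssh, assuming "sudo tee" as the remote writer (minikube's ssh_runner may well do this differently):

package assetcopy

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile pipes an in-memory byte slice to dst on the remote host
// through a shell command, instead of staging a temporary file and scp-ing it.
func writeRemoteFile(client *ssh.Client, dst string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	sess.Stdin = bytes.NewReader(data)
	// "sudo tee" writes stdin to the destination path; its stdout is discarded.
	return sess.Run(fmt.Sprintf("sudo tee %q > /dev/null", dst))
}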
	I0428 23:09:13.879846   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0428 23:09:13.944752   21498 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0428 23:09:13.944771   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0428 23:09:13.956191   21498 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0428 23:09:13.956211   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0428 23:09:13.966341   21498 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0428 23:09:13.966367   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0428 23:09:13.989054   21498 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0428 23:09:13.989076   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0428 23:09:14.038968   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:09:14.043333   21498 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0428 23:09:14.043355   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0428 23:09:14.084246   21498 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0428 23:09:14.084273   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0428 23:09:14.115357   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0428 23:09:14.117901   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0428 23:09:14.137967   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 23:09:14.140357   21498 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0428 23:09:14.140376   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0428 23:09:14.144876   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0428 23:09:14.227233   21498 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0428 23:09:14.227257   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0428 23:09:14.229696   21498 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0428 23:09:14.229721   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0428 23:09:14.241056   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 23:09:14.241087   21498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:09:14.263223   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0428 23:09:14.263244   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0428 23:09:14.275570   21498 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0428 23:09:14.275588   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0428 23:09:14.297643   21498 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0428 23:09:14.297668   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0428 23:09:14.502491   21498 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0428 23:09:14.502514   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0428 23:09:14.513521   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0428 23:09:14.517258   21498 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0428 23:09:14.517281   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0428 23:09:14.531349   21498 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0428 23:09:14.531370   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0428 23:09:14.646376   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0428 23:09:14.646414   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0428 23:09:14.742803   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0428 23:09:14.795310   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0428 23:09:14.843751   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0428 23:09:14.843776   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0428 23:09:14.844246   21498 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0428 23:09:14.844262   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0428 23:09:14.869145   21498 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0428 23:09:14.869169   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0428 23:09:14.911799   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0428 23:09:15.028313   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0428 23:09:15.028341   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0428 23:09:15.087379   21498 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0428 23:09:15.087404   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0428 23:09:15.199567   21498 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 23:09:15.199602   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0428 23:09:15.207618   21498 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0428 23:09:15.207645   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0428 23:09:15.410200   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0428 23:09:15.410227   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0428 23:09:15.542511   21498 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0428 23:09:15.542552   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0428 23:09:15.609985   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 23:09:15.630551   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0428 23:09:15.722664   21498 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0428 23:09:15.722684   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0428 23:09:15.765185   21498 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0428 23:09:15.765208   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0428 23:09:16.051774   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0428 23:09:16.051796   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0428 23:09:16.066176   21498 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0428 23:09:16.066196   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0428 23:09:16.240104   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0428 23:09:16.402087   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0428 23:09:16.402111   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0428 23:09:16.858118   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0428 23:09:16.858145   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0428 23:09:17.063860   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.183974178s)
	I0428 23:09:17.063901   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:17.063915   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:17.064186   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:17.064221   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:17.064236   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:17.064251   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:17.064264   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:17.064471   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:17.064485   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:17.293813   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0428 23:09:17.293840   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0428 23:09:17.649368   21498 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0428 23:09:17.649391   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0428 23:09:18.054697   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0428 23:09:19.601318   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.562318545s)
	I0428 23:09:19.601354   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:19.601367   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:19.601653   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:19.601695   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:19.601711   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:19.601720   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:19.601745   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:19.602062   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:19.602079   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:20.535470   21498 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0428 23:09:20.535508   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:20.538921   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:20.539373   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:20.539410   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:20.539594   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:20.539814   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:20.540011   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:20.540183   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:21.276017   21498 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0428 23:09:21.973483   21498 addons.go:234] Setting addon gcp-auth=true in "addons-971694"
	I0428 23:09:21.973547   21498 host.go:66] Checking if "addons-971694" exists ...
	I0428 23:09:21.973850   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:21.973880   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:21.988620   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0428 23:09:21.989037   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:21.989492   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:21.989517   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:21.989825   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:21.990283   21498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:09:21.990310   21498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:09:22.005591   21498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0428 23:09:22.005983   21498 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:09:22.006464   21498 main.go:141] libmachine: Using API Version  1
	I0428 23:09:22.006493   21498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:09:22.006809   21498 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:09:22.007018   21498 main.go:141] libmachine: (addons-971694) Calling .GetState
	I0428 23:09:22.008454   21498 main.go:141] libmachine: (addons-971694) Calling .DriverName
	I0428 23:09:22.008678   21498 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0428 23:09:22.008705   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHHostname
	I0428 23:09:22.010979   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:22.011401   21498 main.go:141] libmachine: (addons-971694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e2:9e", ip: ""} in network mk-addons-971694: {Iface:virbr1 ExpiryTime:2024-04-29 00:08:33 +0000 UTC Type:0 Mac:52:54:00:36:e2:9e Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-971694 Clientid:01:52:54:00:36:e2:9e}
	I0428 23:09:22.011427   21498 main.go:141] libmachine: (addons-971694) DBG | domain addons-971694 has defined IP address 192.168.39.130 and MAC address 52:54:00:36:e2:9e in network mk-addons-971694
	I0428 23:09:22.011675   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHPort
	I0428 23:09:22.011849   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHKeyPath
	I0428 23:09:22.012052   21498 main.go:141] libmachine: (addons-971694) Calling .GetSSHUsername
	I0428 23:09:22.012188   21498 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/addons-971694/id_rsa Username:docker}
	I0428 23:09:23.168238   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.052843924s)
	I0428 23:09:23.168276   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.030283197s)
	I0428 23:09:23.168296   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168304   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168308   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168315   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168324   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.023417966s)
	I0428 23:09:23.168256   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.050323893s)
	I0428 23:09:23.168352   21498 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.927239412s)
	I0428 23:09:23.168425   21498 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.927344802s)
	I0428 23:09:23.168448   21498 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0428 23:09:23.168494   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.654939561s)
	I0428 23:09:23.168521   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168530   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168542   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.425707539s)
	I0428 23:09:23.168592   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.373255082s)
	I0428 23:09:23.168612   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168621   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168594   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168657   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168661   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.256832522s)
	I0428 23:09:23.168358   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168679   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168687   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168687   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168802   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.558787427s)
	I0428 23:09:23.168357   21498 main.go:141] libmachine: Making call to close driver server
	W0428 23:09:23.168827   21498 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0428 23:09:23.168841   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168870   21498 retry.go:31] will retry after 136.919143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
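This apply fails because of ordering, not because anything is missing: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished establishing those CRDs, so there is no REST mapping yet for kind VolumeSnapshotClass. minikube simply waits ~137ms and retries (the retried apply at 23:09:23 also adds --force). One way to sidestep the race is to wait for the CRD's Established condition before applying objects of that kind; a sketch using the apiextensions clientset, with an illustrative poll interval:

package crdwait

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/rest"
)

// waitForCRDEstablished polls until the named CRD reports Established=True,
// after which custom objects of its kind can be applied without a
// "resource mapping not found" error.
func waitForCRDEstablished(cfg *rest.Config, name string, timeout time.Duration) error {
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not visible yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}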
	I0428 23:09:23.168926   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.538349955s)
	I0428 23:09:23.168944   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.168952   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.168994   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.928854699s)
	I0428 23:09:23.169013   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169022   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169377   21498 node_ready.go:35] waiting up to 6m0s for node "addons-971694" to be "Ready" ...
	I0428 23:09:23.169457   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169477   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169485   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169496   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169501   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169505   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169508   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169515   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169533   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169543   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169548   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169550   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169564   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169578   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169586   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169593   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169563   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169613   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169626   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169641   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169647   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169654   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169661   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169704   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169710   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169717   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169725   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169757   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169774   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169781   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169789   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169795   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.169835   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.169851   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169858   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169864   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.169872   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.170362   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.170378   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.170387   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.170393   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.171097   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171134   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171141   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.169516   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.171167   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.171476   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171518   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171526   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.171534   21498 addons.go:470] Verifying addon registry=true in "addons-971694"
	I0428 23:09:23.174706   21498 out.go:177] * Verifying registry addon...
	I0428 23:09:23.171789   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171813   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171832   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171883   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171901   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171920   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171934   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.171963   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171980   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.171994   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.172916   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.172971   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.172997   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.173526   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.169599   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.176751   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.176763   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.176771   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.176793   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.176805   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.176813   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.176814   21498 addons.go:470] Verifying addon metrics-server=true in "addons-971694"
	I0428 23:09:23.176841   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.176848   21498 addons.go:470] Verifying addon ingress=true in "addons-971694"
	I0428 23:09:23.176865   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.178341   21498 out.go:177] * Verifying ingress addon...
	I0428 23:09:23.176797   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.177128   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.177129   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.177129   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.177159   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.177587   21498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0428 23:09:23.179736   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.179761   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.181063   21498 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-971694 service yakd-dashboard -n yakd-dashboard
	
	I0428 23:09:23.180322   21498 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0428 23:09:23.258580   21498 node_ready.go:49] node "addons-971694" has status "Ready":"True"
	I0428 23:09:23.258614   21498 node_ready.go:38] duration metric: took 89.218218ms for node "addons-971694" to be "Ready" ...
	I0428 23:09:23.258627   21498 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 23:09:23.275597   21498 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0428 23:09:23.275621   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:23.276346   21498 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0428 23:09:23.276373   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:23.294185   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.294204   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.294474   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:23.294478   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.294502   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	W0428 23:09:23.294596   21498 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
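The default-storageclass warning is an optimistic-concurrency conflict: between reading the "standard" StorageClass and writing it back with the default annotation, something else updated the object, so its resourceVersion no longer matched. The standard client-go remedy is to re-read and retry the mutation; a minimal sketch with k8s.io/client-go/util/retry (the annotation key is the usual Kubernetes one, the rest is illustrative):

package defaultsc

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markDefault sets the is-default-class annotation on a StorageClass,
// re-fetching the object and retrying whenever the update hits a conflict.
func markDefault(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another attempt
	})
}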
	I0428 23:09:23.297026   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:23.297042   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:23.297302   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:23.297322   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:23.306267   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0428 23:09:23.328678   21498 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n5mdf" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.390126   21498 pod_ready.go:92] pod "coredns-7db6d8ff4d-n5mdf" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.390151   21498 pod_ready.go:81] duration metric: took 61.439686ms for pod "coredns-7db6d8ff4d-n5mdf" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.390164   21498 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r2lfm" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.424398   21498 pod_ready.go:92] pod "coredns-7db6d8ff4d-r2lfm" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.424433   21498 pod_ready.go:81] duration metric: took 34.259839ms for pod "coredns-7db6d8ff4d-r2lfm" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.424446   21498 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.442848   21498 pod_ready.go:92] pod "etcd-addons-971694" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.442873   21498 pod_ready.go:81] duration metric: took 18.419565ms for pod "etcd-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.442886   21498 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.484045   21498 pod_ready.go:92] pod "kube-apiserver-addons-971694" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.484072   21498 pod_ready.go:81] duration metric: took 41.176278ms for pod "kube-apiserver-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.484086   21498 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.572927   21498 pod_ready.go:92] pod "kube-controller-manager-addons-971694" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.572952   21498 pod_ready.go:81] duration metric: took 88.858818ms for pod "kube-controller-manager-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.572964   21498 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7rzct" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.675780   21498 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-971694" context rescaled to 1 replicas
	I0428 23:09:23.686203   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:23.692016   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:23.972571   21498 pod_ready.go:92] pod "kube-proxy-7rzct" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:23.972604   21498 pod_ready.go:81] duration metric: took 399.632205ms for pod "kube-proxy-7rzct" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:23.972615   21498 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:24.185351   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:24.186659   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:24.373490   21498 pod_ready.go:92] pod "kube-scheduler-addons-971694" in "kube-system" namespace has status "Ready":"True"
	I0428 23:09:24.373517   21498 pod_ready.go:81] duration metric: took 400.894182ms for pod "kube-scheduler-addons-971694" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:24.373529   21498 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace to be "Ready" ...
	I0428 23:09:24.687780   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:24.688695   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:25.189129   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:25.197047   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:25.698741   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:25.698805   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:25.954864   21498 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.946159954s)
	I0428 23:09:25.956600   21498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0428 23:09:25.954989   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.648689295s)
	I0428 23:09:25.957107   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.90237149s)
	I0428 23:09:25.958102   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:25.958122   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:25.959848   21498 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0428 23:09:25.958124   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:25.959882   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:25.958418   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:25.959932   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:25.959957   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:25.959972   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:25.958445   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:25.961723   21498 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0428 23:09:25.961745   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0428 23:09:25.960312   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:25.960319   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:25.960284   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:25.961794   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:25.961804   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:25.961811   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:25.960338   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:25.961857   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:25.961866   21498 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-971694"
	I0428 23:09:25.963994   21498 out.go:177] * Verifying csi-hostpath-driver addon...
	I0428 23:09:25.962038   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:25.962052   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:25.965291   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:25.965878   21498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0428 23:09:26.006671   21498 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0428 23:09:26.006695   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:26.068603   21498 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0428 23:09:26.068637   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0428 23:09:26.122426   21498 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0428 23:09:26.122452   21498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0428 23:09:26.174260   21498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0428 23:09:26.184918   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:26.188302   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:26.380525   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:26.472551   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:26.684512   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:26.687449   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:26.971403   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:27.189400   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:27.191119   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:27.489763   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:27.601005   21498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.426684753s)
	I0428 23:09:27.601060   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:27.601069   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:27.601308   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:27.601325   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:27.601335   21498 main.go:141] libmachine: Making call to close driver server
	I0428 23:09:27.601342   21498 main.go:141] libmachine: (addons-971694) Calling .Close
	I0428 23:09:27.601345   21498 main.go:141] libmachine: (addons-971694) DBG | Closing plugin on server side
	I0428 23:09:27.601610   21498 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:09:27.601624   21498 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:09:27.603410   21498 addons.go:470] Verifying addon gcp-auth=true in "addons-971694"
	I0428 23:09:27.605286   21498 out.go:177] * Verifying gcp-auth addon...
	I0428 23:09:27.606922   21498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0428 23:09:27.624456   21498 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0428 23:09:27.624482   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:27.686553   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:27.688787   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:27.972394   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:28.110537   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:28.185382   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:28.187548   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:28.472052   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:28.611027   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:28.684679   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:28.687974   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:28.886141   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:28.971493   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:29.111582   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:29.185359   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:29.188102   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:29.472053   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:29.611066   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:29.684280   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:29.686875   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:29.971912   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:30.111242   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:30.185413   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:30.187434   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:30.472884   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:30.611374   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:30.688557   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:30.688706   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:30.903750   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:30.975608   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:31.110880   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:31.186159   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:31.188051   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:31.472297   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:31.611076   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:31.686600   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:31.688707   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:31.971776   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:32.110769   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:32.189454   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:32.189759   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:32.473019   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:32.611011   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:32.685357   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:32.688542   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:32.971940   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:33.111063   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:33.185733   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:33.187009   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:33.379209   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:33.472163   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:33.610291   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:33.685958   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:33.687866   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:33.971236   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:34.111088   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:34.185697   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:34.187240   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:34.471247   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:34.611406   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:34.686674   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:34.687098   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:34.971806   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:35.111159   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:35.184510   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:35.186988   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:35.476202   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:35.611383   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:35.684701   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:35.687497   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:35.883906   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:35.971494   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:36.111990   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:36.187859   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:36.189267   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:36.479736   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:36.610064   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:36.684395   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:36.687182   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:36.973128   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:37.110515   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:37.185619   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:37.193234   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:37.479674   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:37.610937   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:37.684263   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:37.688945   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:37.971406   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:38.111786   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:38.185341   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:38.186851   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:38.379392   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:38.472204   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:38.611256   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:38.685995   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:38.691268   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:38.971456   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:39.111539   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:39.186235   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:39.188284   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:39.472452   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:39.611434   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:39.834864   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:39.835146   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:39.972013   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:40.111683   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:40.186906   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:40.191481   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:40.380284   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:40.471986   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:40.611353   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:40.699494   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:40.714421   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:40.972285   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:41.110624   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:41.189441   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:41.191605   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:41.472353   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:41.613200   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:41.941371   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:41.944203   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:41.985761   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:42.111597   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:42.185847   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:42.188837   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:42.380776   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:42.474195   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:42.611263   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:42.687301   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:42.687688   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:42.973149   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:43.110656   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:43.184658   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:43.188294   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:43.471855   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:43.611373   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:43.685529   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:43.689010   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:43.974915   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:44.110781   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:44.191109   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:44.191463   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:44.381596   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:44.472185   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:44.615703   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:44.685984   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:44.688133   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:44.972287   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:45.111621   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:45.185268   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:45.188182   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:45.471207   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:45.612929   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:45.684840   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:45.690627   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:45.973270   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:46.111028   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:46.184531   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:46.187305   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:46.473479   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:46.611284   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:46.687853   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:46.689920   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:46.878789   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:46.972484   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:47.112142   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:47.184502   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:47.186903   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:47.472732   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:47.610735   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:47.686464   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:47.689540   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:47.972588   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:48.113137   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:48.184782   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:48.187286   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:48.471637   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:48.610621   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:48.686118   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:48.688051   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:48.879463   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:48.971505   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:49.110368   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:49.184962   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:49.187182   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:49.474192   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:49.611599   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:49.685141   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:49.688064   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:49.975731   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:50.499976   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:50.504311   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:50.505639   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:50.505820   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:50.611175   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:50.684473   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:50.687371   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:50.882690   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:50.972629   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:51.111424   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:51.186619   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:51.188703   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:51.472267   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:51.612592   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:51.686884   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:51.689376   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:51.972225   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:52.111407   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:52.185025   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:52.186514   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:52.472766   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:52.611098   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:52.684509   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:52.687361   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:52.971551   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:53.111118   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:53.184479   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:53.187887   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:53.380454   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:53.477484   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:53.611755   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:53.686821   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:53.689069   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:53.972273   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:54.110896   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:54.185579   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:54.187561   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:54.471538   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:54.611075   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:54.687509   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:54.692124   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:54.970783   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:55.110414   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:55.185000   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:55.187374   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:55.385952   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:55.474803   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:55.611187   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:55.685281   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:55.687534   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:55.971713   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:56.110462   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:56.184904   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:56.189396   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:56.473457   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:56.611390   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:56.687830   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:56.689327   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:56.976524   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:57.111352   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:57.188959   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:57.189494   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:57.471389   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:57.611772   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:57.684865   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:57.686847   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:57.880077   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:57.975684   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:58.110595   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:58.185724   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:58.187327   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:58.476175   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:58.617617   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:58.695843   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:58.696136   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:58.973255   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:59.112263   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:59.184929   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:59.187873   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:59.472380   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:09:59.610946   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:09:59.693244   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:09:59.694067   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:09:59.880292   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:09:59.974661   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:00.111088   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:00.184454   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:00.187208   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:00.474115   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:01.014645   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:01.016074   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:01.018387   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:01.020468   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:01.110716   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:01.184955   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:01.190997   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:01.472849   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:01.611009   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:01.685410   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:01.687271   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:01.971414   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:02.111809   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:02.186865   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:02.188786   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:02.380152   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:02.470839   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:02.610692   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:02.685374   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:02.687320   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:02.975581   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:03.354731   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:03.355246   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:03.355420   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:03.472749   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:03.611276   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:03.690089   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:03.690897   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:03.973873   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:04.121667   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:04.187421   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:04.192549   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:04.389855   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:04.472007   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:04.611680   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:04.685338   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:04.687255   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:04.972720   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:05.111981   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:05.187996   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:05.189718   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:05.472009   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:05.610925   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:05.686612   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:05.687729   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:05.972286   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:06.111697   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:06.186154   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:06.190878   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:06.471810   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:06.610628   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:06.685329   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:06.688596   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:06.881297   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:06.972457   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:07.112499   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:07.184969   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:07.187666   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:07.471940   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:07.611212   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:07.685232   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:07.687513   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:07.971883   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:08.111288   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:08.185119   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:08.188184   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:08.472334   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:08.611480   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:08.685812   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:08.694589   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:08.974686   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:09.110836   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:09.185459   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:09.188803   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:09.381551   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:09.472281   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:09.611242   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:09.685410   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:09.686978   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:09.980460   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:10.111597   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:10.186149   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:10.187840   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:10.476726   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:10.611725   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:10.686333   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:10.689051   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:10.972497   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:11.111781   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:11.187284   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:11.188809   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:11.384026   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:11.482480   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:11.612767   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:11.684865   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:11.687546   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:11.971945   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:12.111572   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:12.185935   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:12.187445   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:12.472625   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:12.611923   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:12.686915   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:12.688568   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:12.971784   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:13.110704   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:13.185620   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:13.187293   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:13.471866   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:13.614497   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:13.687162   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:13.688652   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:13.880795   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:13.972779   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:14.114172   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:14.185915   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:14.187946   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:14.472909   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:14.611556   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:14.686017   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:14.690328   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:14.973388   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:15.111518   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:15.196583   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:15.214578   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:15.472197   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:15.610996   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:15.684541   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:15.687036   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:15.881824   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:15.972756   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:16.111258   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:16.186623   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:16.188679   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:16.473530   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:16.612376   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:16.685820   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0428 23:10:16.687149   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:16.971261   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:17.115903   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:17.187187   21498 kapi.go:107] duration metric: took 54.009596475s to wait for kubernetes.io/minikube-addons=registry ...
	I0428 23:10:17.189679   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:17.481390   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:17.988003   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:17.988051   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:17.989176   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:17.992807   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:18.110899   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:18.187202   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:18.473685   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:18.611535   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:18.688731   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:18.972645   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:19.113840   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:19.187795   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:19.472672   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:19.612074   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:19.687945   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:19.972842   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:20.114766   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:20.187988   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:20.381371   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:20.472462   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:20.611107   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:20.687729   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:20.972132   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:21.111054   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:21.186721   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:21.472133   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:21.611129   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:21.687500   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:21.972648   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:22.111126   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:22.188091   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:22.473173   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:22.611121   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:22.687530   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:22.879966   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:22.972060   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:23.111308   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:23.187631   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:23.471887   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:23.910559   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:23.911465   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:23.971579   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:24.111154   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:24.186776   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:24.472151   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:24.612037   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:24.695095   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:24.881248   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:24.971456   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:25.111561   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:25.189887   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:25.474495   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:25.611694   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:25.686717   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:25.971591   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:26.112780   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:26.186473   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:26.471984   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:26.611297   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:26.687927   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:27.074017   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:27.077604   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:27.112202   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:27.187820   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:27.476333   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:27.612790   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:27.700190   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:27.985798   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:28.110239   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:28.187120   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:28.472915   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:28.611194   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:28.687676   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:28.972226   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:29.111250   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:29.187753   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:29.379477   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:29.471625   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:29.611258   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:29.688102   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:29.977674   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:30.110563   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:30.188005   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:30.471772   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:30.613480   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:30.695788   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:30.972226   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:31.111395   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:31.187293   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:31.484215   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:31.613139   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:31.688357   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:31.881165   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:31.975962   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:32.111550   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:32.187361   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:32.473624   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:32.611772   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:32.695913   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:32.972169   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:33.111321   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:33.187549   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:33.507783   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:33.616607   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:33.687531   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:33.889811   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:33.976961   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:34.114324   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:34.187524   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:34.472803   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:34.611580   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:34.686473   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:34.971941   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:35.111075   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:35.187057   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:35.472379   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:35.629066   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:35.703054   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:36.151123   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:36.186912   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:36.199317   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:36.200267   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:36.470843   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:36.610987   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:36.686420   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:36.971508   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:37.111730   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:37.187623   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:37.471996   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:37.617415   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:37.704991   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:37.972471   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:38.111675   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:38.187282   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:38.379500   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:38.498212   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:38.611172   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:38.687645   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:38.971204   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:39.110301   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:39.187679   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:39.472418   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:39.611017   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:39.688513   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:39.978031   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:40.111687   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:40.187797   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:40.381048   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:40.472385   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:40.611159   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:40.688289   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:40.972137   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:41.110841   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:41.187791   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:41.471066   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0428 23:10:41.611388   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:41.687486   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:41.972415   21498 kapi.go:107] duration metric: took 1m16.006533351s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0428 23:10:42.111231   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:42.187973   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:42.381370   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:42.610987   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:42.687891   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:43.110619   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:43.187943   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:43.610804   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:43.687395   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:44.110491   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:44.187806   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:44.611413   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:44.688083   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:44.879294   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:45.111063   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:45.187268   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:45.611811   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:45.687754   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:46.111700   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:46.186874   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:46.611996   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:46.687619   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:46.881614   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:47.110978   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:47.188011   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:47.610416   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:47.688839   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:48.112309   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:48.188894   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:48.611643   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:48.690109   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:48.882485   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:49.111622   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:49.187028   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:49.611129   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:49.688312   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:50.111761   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:50.186939   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:50.611464   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:50.688221   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:51.111688   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:51.187189   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:51.392915   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:51.614533   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:51.689277   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:52.112202   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:52.188165   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:52.610501   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:52.688379   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:53.110523   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:53.187882   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:53.611364   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:53.688302   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:53.880582   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:54.112007   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:54.188473   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:54.611651   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:54.688008   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:55.111904   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:55.187855   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:55.611408   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:55.687800   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:56.111715   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:56.188456   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:56.379708   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:56.611499   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:56.696128   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:57.311088   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:57.311617   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:57.611609   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:57.686439   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:58.112009   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:58.188453   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:58.380853   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:10:58.611568   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:58.687965   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:59.111131   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:59.187519   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:10:59.611126   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:10:59.687115   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:00.111547   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:00.188265   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:00.612473   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:00.687226   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:00.880039   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:11:01.110503   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:01.187948   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:01.610890   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:01.687962   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:02.112634   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:02.187033   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:02.610819   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:02.696977   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:02.881492   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:11:03.112099   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:03.188378   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:03.610946   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:03.686508   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:04.112354   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:04.187679   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:04.611329   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:04.689456   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:05.111987   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:05.188130   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:05.380238   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:11:05.610885   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:05.687134   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:06.111487   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:06.189605   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:06.612177   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:06.688051   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:07.111156   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:07.188883   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:07.610483   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:07.688207   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:07.879038   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:11:08.111135   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:08.187474   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:08.611486   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:08.688938   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:09.111141   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:09.187321   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:09.610565   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:09.686745   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:09.880390   21498 pod_ready.go:102] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"False"
	I0428 23:11:10.113495   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:10.187521   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:10.382870   21498 pod_ready.go:92] pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace has status "Ready":"True"
	I0428 23:11:10.382892   21498 pod_ready.go:81] duration metric: took 1m46.00935532s for pod "metrics-server-c59844bb4-7s9h6" in "kube-system" namespace to be "Ready" ...
	I0428 23:11:10.382901   21498 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s5btm" in "kube-system" namespace to be "Ready" ...
	I0428 23:11:10.388513   21498 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-s5btm" in "kube-system" namespace has status "Ready":"True"
	I0428 23:11:10.388530   21498 pod_ready.go:81] duration metric: took 5.623589ms for pod "nvidia-device-plugin-daemonset-s5btm" in "kube-system" namespace to be "Ready" ...
	I0428 23:11:10.388545   21498 pod_ready.go:38] duration metric: took 1m47.129905854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 23:11:10.388561   21498 api_server.go:52] waiting for apiserver process to appear ...
	I0428 23:11:10.388582   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0428 23:11:10.388627   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0428 23:11:10.472251   21498 cri.go:91] found id: "bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1"
	I0428 23:11:10.472274   21498 cri.go:91] found id: ""
	I0428 23:11:10.472281   21498 logs.go:276] 1 containers: [bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1]
	I0428 23:11:10.472337   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.477366   21498 cri.go:56] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0428 23:11:10.477413   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0428 23:11:10.560224   21498 cri.go:91] found id: "c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2"
	I0428 23:11:10.560252   21498 cri.go:91] found id: ""
	I0428 23:11:10.560261   21498 logs.go:276] 1 containers: [c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2]
	I0428 23:11:10.560316   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.566057   21498 cri.go:56] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0428 23:11:10.566134   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0428 23:11:10.604545   21498 cri.go:91] found id: "c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af"
	I0428 23:11:10.604570   21498 cri.go:91] found id: ""
	I0428 23:11:10.604579   21498 logs.go:276] 1 containers: [c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af]
	I0428 23:11:10.604637   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.611477   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:10.613408   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0428 23:11:10.613463   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0428 23:11:10.656567   21498 cri.go:91] found id: "fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b"
	I0428 23:11:10.656582   21498 cri.go:91] found id: ""
	I0428 23:11:10.656589   21498 logs.go:276] 1 containers: [fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b]
	I0428 23:11:10.656639   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.662048   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0428 23:11:10.662097   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0428 23:11:10.687276   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:10.721284   21498 cri.go:91] found id: "9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877"
	I0428 23:11:10.721307   21498 cri.go:91] found id: ""
	I0428 23:11:10.721315   21498 logs.go:276] 1 containers: [9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877]
	I0428 23:11:10.721370   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.726563   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0428 23:11:10.726627   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0428 23:11:10.772099   21498 cri.go:91] found id: "d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9"
	I0428 23:11:10.772121   21498 cri.go:91] found id: ""
	I0428 23:11:10.772129   21498 logs.go:276] 1 containers: [d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9]
	I0428 23:11:10.772177   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:10.777418   21498 cri.go:56] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0428 23:11:10.777492   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0428 23:11:10.824201   21498 cri.go:91] found id: ""
	I0428 23:11:10.824223   21498 logs.go:276] 0 containers: []
	W0428 23:11:10.824232   21498 logs.go:278] No container was found matching "kindnet"
	I0428 23:11:10.824243   21498 logs.go:123] Gathering logs for kube-controller-manager [d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9] ...
	I0428 23:11:10.824257   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9"
	I0428 23:11:10.885481   21498 logs.go:123] Gathering logs for kubelet ...
	I0428 23:11:10.885514   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0428 23:11:10.971574   21498 logs.go:123] Gathering logs for dmesg ...
	I0428 23:11:10.971612   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0428 23:11:10.988252   21498 logs.go:123] Gathering logs for describe nodes ...
	I0428 23:11:10.988281   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0428 23:11:11.110833   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:11.161576   21498 logs.go:123] Gathering logs for coredns [c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af] ...
	I0428 23:11:11.161603   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af"
	I0428 23:11:11.187657   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:11.206267   21498 logs.go:123] Gathering logs for kube-scheduler [fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b] ...
	I0428 23:11:11.206293   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b"
	I0428 23:11:11.256377   21498 logs.go:123] Gathering logs for kube-apiserver [bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1] ...
	I0428 23:11:11.256408   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1"
	I0428 23:11:11.308143   21498 logs.go:123] Gathering logs for etcd [c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2] ...
	I0428 23:11:11.308175   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2"
	I0428 23:11:11.379511   21498 logs.go:123] Gathering logs for kube-proxy [9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877] ...
	I0428 23:11:11.379542   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877"
	I0428 23:11:11.433864   21498 logs.go:123] Gathering logs for CRI-O ...
	I0428 23:11:11.433893   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0428 23:11:11.611664   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:11.686453   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:12.111557   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:12.188129   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:12.435998   21498 logs.go:123] Gathering logs for container status ...
	I0428 23:11:12.436037   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0428 23:11:12.611059   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:12.688776   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:13.113941   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:13.187938   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:13.613058   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:13.688401   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:14.110846   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:14.187715   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:14.611301   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:14.688771   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:14.994465   21498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 23:11:15.015249   21498 api_server.go:72] duration metric: took 2m1.658575027s to wait for apiserver process to appear ...
	I0428 23:11:15.015281   21498 api_server.go:88] waiting for apiserver healthz status ...
	I0428 23:11:15.015317   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0428 23:11:15.015378   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0428 23:11:15.057622   21498 cri.go:91] found id: "bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1"
	I0428 23:11:15.057653   21498 cri.go:91] found id: ""
	I0428 23:11:15.057663   21498 logs.go:276] 1 containers: [bb3ec6107ca73a6170ecae476773855cbeaaeda15e82a77a96df476fa00fe1d1]
	I0428 23:11:15.057723   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.063917   21498 cri.go:56] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0428 23:11:15.063992   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0428 23:11:15.110582   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:15.114985   21498 cri.go:91] found id: "c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2"
	I0428 23:11:15.115001   21498 cri.go:91] found id: ""
	I0428 23:11:15.115008   21498 logs.go:276] 1 containers: [c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2]
	I0428 23:11:15.115050   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.124787   21498 cri.go:56] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0428 23:11:15.124859   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0428 23:11:15.172846   21498 cri.go:91] found id: "c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af"
	I0428 23:11:15.172865   21498 cri.go:91] found id: ""
	I0428 23:11:15.172872   21498 logs.go:276] 1 containers: [c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af]
	I0428 23:11:15.172917   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.177565   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0428 23:11:15.177630   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0428 23:11:15.188142   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:15.224112   21498 cri.go:91] found id: "fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b"
	I0428 23:11:15.224131   21498 cri.go:91] found id: ""
	I0428 23:11:15.224138   21498 logs.go:276] 1 containers: [fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b]
	I0428 23:11:15.224192   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.228717   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0428 23:11:15.228785   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0428 23:11:15.273650   21498 cri.go:91] found id: "9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877"
	I0428 23:11:15.273668   21498 cri.go:91] found id: ""
	I0428 23:11:15.273675   21498 logs.go:276] 1 containers: [9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877]
	I0428 23:11:15.273717   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.278646   21498 cri.go:56] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0428 23:11:15.278696   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0428 23:11:15.320984   21498 cri.go:91] found id: "d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9"
	I0428 23:11:15.320998   21498 cri.go:91] found id: ""
	I0428 23:11:15.321005   21498 logs.go:276] 1 containers: [d714effcc4e35301c07ed693205f36a96d3bd7b7887fe9d5c77f28554a2b83f9]
	I0428 23:11:15.321045   21498 ssh_runner.go:195] Run: which crictl
	I0428 23:11:15.325869   21498 cri.go:56] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0428 23:11:15.325930   21498 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0428 23:11:15.370746   21498 cri.go:91] found id: ""
	I0428 23:11:15.370780   21498 logs.go:276] 0 containers: []
	W0428 23:11:15.370791   21498 logs.go:278] No container was found matching "kindnet"
	I0428 23:11:15.370801   21498 logs.go:123] Gathering logs for kube-proxy [9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877] ...
	I0428 23:11:15.370819   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afa18bdcc92af0e30babd9f4af0ab94774e29bafff6f344a2fa20e34f863877"
	I0428 23:11:15.411932   21498 logs.go:123] Gathering logs for dmesg ...
	I0428 23:11:15.411957   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0428 23:11:15.432524   21498 logs.go:123] Gathering logs for etcd [c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2] ...
	I0428 23:11:15.432553   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0586470a89c04a40c0f62b71c006cbb1a9ce9bd9a90ce05bd635cbcb5cb45d2"
	I0428 23:11:15.501505   21498 logs.go:123] Gathering logs for coredns [c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af] ...
	I0428 23:11:15.501540   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0604f9db8c7c2f8ea765b357d006f270963bf6446d8efab4f93703659f405af"
	I0428 23:11:15.543068   21498 logs.go:123] Gathering logs for kube-scheduler [fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b] ...
	I0428 23:11:15.543101   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe73c801a68560ef3027515ab0e01a2661114504bef04858433810cf7013ee0b"
	I0428 23:11:15.588480   21498 logs.go:123] Gathering logs for CRI-O ...
	I0428 23:11:15.588518   21498 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0428 23:11:15.610791   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:15.687986   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:16.113747   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:16.188438   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:16.611468   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:16.689763   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:17.110930   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:17.187932   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:17.611718   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:17.686874   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:18.114487   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:18.188273   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:18.611956   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:18.687329   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:19.111083   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:19.188079   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:19.611269   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:19.688187   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:20.111723   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:20.187038   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:20.610891   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:20.687461   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:21.110859   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:21.188186   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:21.611718   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:21.686897   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:22.112895   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:22.187418   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:22.611561   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:22.687813   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:23.111381   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:23.187389   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:23.610824   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:23.689105   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:24.112035   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:24.194995   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:24.614123   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:24.688316   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:25.111157   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:25.187546   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:25.611151   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:25.688408   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:26.112103   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:26.188259   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:26.610409   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:26.687855   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:27.112341   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:27.187805   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:27.610920   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:27.687577   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:28.111148   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:28.187602   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:28.611142   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:28.689417   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:29.112505   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:29.187854   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:29.611354   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:29.687716   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:30.111557   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:30.188129   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:30.610974   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:30.687299   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:31.111518   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:31.188551   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:31.610602   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:31.686966   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:32.111402   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:32.187698   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:32.610933   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:32.687528   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:33.111442   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:33.188830   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:33.611944   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:33.688032   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:34.112612   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:34.188080   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:34.611809   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:34.688850   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:35.111629   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:35.187645   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:35.611073   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:35.688577   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:36.112372   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:36.188863   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:36.611019   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:36.687734   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:37.111000   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:37.187360   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:37.611358   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:37.688206   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:38.112051   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:38.188896   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:38.610286   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:38.687622   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:39.112414   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:39.187796   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:39.611056   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:39.689783   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:40.111103   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:40.187489   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:40.611105   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:40.687759   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:41.111846   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:41.187855   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:41.610924   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:41.686795   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:42.111212   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:42.188037   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:42.611377   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:42.693230   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:43.111388   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:43.187670   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:43.612253   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:43.687589   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:44.111258   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:44.188130   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:44.611097   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:44.687230   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:45.111212   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:45.187616   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:45.611099   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:45.687405   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:46.110241   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:46.191504   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:46.615025   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:46.689206   21498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0428 23:11:47.116214   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:47.189774   21498 kapi.go:107] duration metric: took 2m24.00944923s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0428 23:11:47.611422   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:48.115623   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:48.612975   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:49.110796   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:49.611529   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:50.111974   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:50.615222   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:51.111697   21498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0428 23:11:51.610787   21498 kapi.go:107] duration metric: took 2m24.003864048s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0428 23:11:51.612345   21498 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-971694 cluster.
	I0428 23:11:51.613589   21498 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0428 23:11:51.614792   21498 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0428 23:11:51.616107   21498 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0428 23:11:51.617323   21498 addons.go:505] duration metric: took 2m38.260762629s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin helm-tiller metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-linux-amd64 start -p addons-971694 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
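The repeated kapi.go:96 lines above come from minikube's addon wait loop, which keeps listing pods that match a label selector until they leave the Pending phase; in this run both the ingress-nginx and gcp-auth selectors did become ready after roughly 2m24s each, and the overall start was later killed at the 40-minute suite timeout. For readers who want to reproduce that kind of readiness poll outside minikube, below is a minimal sketch using client-go. It is an illustration only, not minikube's actual implementation; the kubeconfig path, namespace, selector, timeout, and poll interval are assumptions.

	// wait_for_pods.go - minimal sketch: poll until pods matching a label selector are Running.
	// Assumptions (not taken from minikube's source): kubeconfig path, namespace, selector, timeout.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		selector := "app.kubernetes.io/name=ingress-nginx"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && allRunning(pods.Items) {
				fmt.Println("pods are Running")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
		}
		fmt.Println("timed out waiting for", selector)
	}

	// allRunning reports whether every pod in the list has reached the Running phase.
	func allRunning(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}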

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 node stop m02 -v=7 --alsologtostderr
E0429 00:02:10.552590   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.48810058s)

                                                
                                                
-- stdout --
	* Stopping node "ha-274394-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:01:31.280131   40429 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:01:31.280308   40429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:01:31.280325   40429 out.go:304] Setting ErrFile to fd 2...
	I0429 00:01:31.280331   40429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:01:31.280613   40429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:01:31.280848   40429 mustload.go:65] Loading cluster: ha-274394
	I0429 00:01:31.282442   40429 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:01:31.282473   40429 stop.go:39] StopHost: ha-274394-m02
	I0429 00:01:31.282941   40429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:01:31.282976   40429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:01:31.298489   40429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0429 00:01:31.298994   40429 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:01:31.299630   40429 main.go:141] libmachine: Using API Version  1
	I0429 00:01:31.299656   40429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:01:31.299963   40429 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:01:31.302121   40429 out.go:177] * Stopping node "ha-274394-m02"  ...
	I0429 00:01:31.303512   40429 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 00:01:31.303546   40429 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:01:31.303753   40429 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 00:01:31.303783   40429 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:01:31.306560   40429 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:01:31.307000   40429 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:01:31.307034   40429 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:01:31.307154   40429 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:01:31.307313   40429 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:01:31.307463   40429 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:01:31.307601   40429 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0429 00:01:31.391907   40429 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 00:01:31.447485   40429 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 00:01:31.505601   40429 main.go:141] libmachine: Stopping "ha-274394-m02"...
	I0429 00:01:31.505639   40429 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:01:31.507231   40429 main.go:141] libmachine: (ha-274394-m02) Calling .Stop
	I0429 00:01:31.510227   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 0/120
	I0429 00:01:32.512498   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 1/120
	I0429 00:01:33.513919   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 2/120
	I0429 00:01:34.515132   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 3/120
	I0429 00:01:35.516643   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 4/120
	I0429 00:01:36.518259   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 5/120
	I0429 00:01:37.520611   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 6/120
	I0429 00:01:38.521999   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 7/120
	I0429 00:01:39.523282   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 8/120
	I0429 00:01:40.524636   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 9/120
	I0429 00:01:41.526579   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 10/120
	I0429 00:01:42.527825   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 11/120
	I0429 00:01:43.529315   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 12/120
	I0429 00:01:44.530719   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 13/120
	I0429 00:01:45.532103   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 14/120
	I0429 00:01:46.534384   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 15/120
	I0429 00:01:47.536515   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 16/120
	I0429 00:01:48.538628   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 17/120
	I0429 00:01:49.540387   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 18/120
	I0429 00:01:50.542213   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 19/120
	I0429 00:01:51.544328   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 20/120
	I0429 00:01:52.545706   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 21/120
	I0429 00:01:53.547019   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 22/120
	I0429 00:01:54.549138   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 23/120
	I0429 00:01:55.550570   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 24/120
	I0429 00:01:56.551941   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 25/120
	I0429 00:01:57.553444   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 26/120
	I0429 00:01:58.554986   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 27/120
	I0429 00:01:59.556549   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 28/120
	I0429 00:02:00.557994   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 29/120
	I0429 00:02:01.559525   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 30/120
	I0429 00:02:02.561214   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 31/120
	I0429 00:02:03.562570   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 32/120
	I0429 00:02:04.563947   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 33/120
	I0429 00:02:05.565385   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 34/120
	I0429 00:02:06.567528   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 35/120
	I0429 00:02:07.569160   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 36/120
	I0429 00:02:08.570548   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 37/120
	I0429 00:02:09.572666   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 38/120
	I0429 00:02:10.573983   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 39/120
	I0429 00:02:11.576073   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 40/120
	I0429 00:02:12.577630   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 41/120
	I0429 00:02:13.579759   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 42/120
	I0429 00:02:14.581215   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 43/120
	I0429 00:02:15.582537   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 44/120
	I0429 00:02:16.584785   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 45/120
	I0429 00:02:17.586200   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 46/120
	I0429 00:02:18.588636   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 47/120
	I0429 00:02:19.590276   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 48/120
	I0429 00:02:20.592733   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 49/120
	I0429 00:02:21.594984   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 50/120
	I0429 00:02:22.596284   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 51/120
	I0429 00:02:23.597595   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 52/120
	I0429 00:02:24.599152   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 53/120
	I0429 00:02:25.600748   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 54/120
	I0429 00:02:26.602668   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 55/120
	I0429 00:02:27.604749   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 56/120
	I0429 00:02:28.606482   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 57/120
	I0429 00:02:29.608015   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 58/120
	I0429 00:02:30.610159   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 59/120
	I0429 00:02:31.612353   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 60/120
	I0429 00:02:32.613575   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 61/120
	I0429 00:02:33.615633   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 62/120
	I0429 00:02:34.616877   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 63/120
	I0429 00:02:35.618638   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 64/120
	I0429 00:02:36.620582   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 65/120
	I0429 00:02:37.621882   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 66/120
	I0429 00:02:38.623126   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 67/120
	I0429 00:02:39.624646   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 68/120
	I0429 00:02:40.626143   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 69/120
	I0429 00:02:41.628252   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 70/120
	I0429 00:02:42.629707   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 71/120
	I0429 00:02:43.631072   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 72/120
	I0429 00:02:44.632490   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 73/120
	I0429 00:02:45.634574   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 74/120
	I0429 00:02:46.636611   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 75/120
	I0429 00:02:47.637899   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 76/120
	I0429 00:02:48.639539   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 77/120
	I0429 00:02:49.641390   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 78/120
	I0429 00:02:50.642862   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 79/120
	I0429 00:02:51.644660   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 80/120
	I0429 00:02:52.645890   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 81/120
	I0429 00:02:53.647759   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 82/120
	I0429 00:02:54.648927   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 83/120
	I0429 00:02:55.650383   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 84/120
	I0429 00:02:56.652306   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 85/120
	I0429 00:02:57.653701   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 86/120
	I0429 00:02:58.655536   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 87/120
	I0429 00:02:59.657121   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 88/120
	I0429 00:03:00.659073   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 89/120
	I0429 00:03:01.661354   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 90/120
	I0429 00:03:02.662886   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 91/120
	I0429 00:03:03.664406   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 92/120
	I0429 00:03:04.665993   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 93/120
	I0429 00:03:05.667605   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 94/120
	I0429 00:03:06.669095   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 95/120
	I0429 00:03:07.670733   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 96/120
	I0429 00:03:08.672157   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 97/120
	I0429 00:03:09.673612   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 98/120
	I0429 00:03:10.674900   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 99/120
	I0429 00:03:11.676528   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 100/120
	I0429 00:03:12.678351   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 101/120
	I0429 00:03:13.680563   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 102/120
	I0429 00:03:14.682157   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 103/120
	I0429 00:03:15.684577   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 104/120
	I0429 00:03:16.686743   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 105/120
	I0429 00:03:17.687987   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 106/120
	I0429 00:03:18.689663   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 107/120
	I0429 00:03:19.691027   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 108/120
	I0429 00:03:20.692562   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 109/120
	I0429 00:03:21.694740   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 110/120
	I0429 00:03:22.696516   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 111/120
	I0429 00:03:23.697788   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 112/120
	I0429 00:03:24.700004   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 113/120
	I0429 00:03:25.701371   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 114/120
	I0429 00:03:26.703088   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 115/120
	I0429 00:03:27.705344   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 116/120
	I0429 00:03:28.706744   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 117/120
	I0429 00:03:29.708415   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 118/120
	I0429 00:03:30.710277   40429 main.go:141] libmachine: (ha-274394-m02) Waiting for machine to stop 119/120
	I0429 00:03:31.711513   40429 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 00:03:31.711646   40429 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-274394 node stop m02 -v=7 --alsologtostderr": exit status 30
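The stderr block above shows the shape of the failure: the KVM driver first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, then asks libvirt to stop the domain (.Stop) and polls its state once per second for up to 120 attempts; here the guest never left the Running state, so the command gave up and exited with status 30. Below is a minimal sketch of that "request stop, then poll" pattern. The vmStop and vmState helpers are hypothetical stand-ins for the real hypervisor calls, not minikube's driver code.

	// stop_wait.go - minimal sketch of a "request stop, then poll" loop like the one logged above.
	// vmStop and vmState are hypothetical helpers standing in for the real libvirt/driver calls.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	const maxAttempts = 120 // matches the 0/120 .. 119/120 counter in the log

	func stopVM(name string) error {
		if err := vmStop(name); err != nil { // graceful stop request to the hypervisor
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := vmState(name)
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(1 * time.Second)
		}
		// This is the branch the failed test hit: the guest ignored the stop request.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// vmStop and vmState would wrap the hypervisor API; stubbed here so the sketch compiles.
	func vmStop(name string) error            { return nil }
	func vmState(name string) (string, error) { return "Running", nil }

	func main() {
		if err := stopVM("ha-274394-m02"); err != nil {
			fmt.Println("X Failed to stop node:", err)
		}
	}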
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
E0429 00:03:32.473145   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (19.193704843s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:03:31.767728   40871 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:03:31.767827   40871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:03:31.767832   40871 out.go:304] Setting ErrFile to fd 2...
	I0429 00:03:31.767835   40871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:03:31.768031   40871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:03:31.768199   40871 out.go:298] Setting JSON to false
	I0429 00:03:31.768223   40871 mustload.go:65] Loading cluster: ha-274394
	I0429 00:03:31.768279   40871 notify.go:220] Checking for updates...
	I0429 00:03:31.768594   40871 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:03:31.768611   40871 status.go:255] checking status of ha-274394 ...
	I0429 00:03:31.768965   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:31.769019   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:31.785410   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0429 00:03:31.785939   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:31.786545   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:31.786581   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:31.786895   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:31.787131   40871 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:03:31.788891   40871 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:03:31.788909   40871 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:03:31.789217   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:31.789258   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:31.803817   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0429 00:03:31.804278   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:31.804751   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:31.804774   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:31.805169   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:31.805390   40871 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:03:31.808163   40871 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:31.808629   40871 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:03:31.808658   40871 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:31.808792   40871 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:03:31.809073   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:31.809118   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:31.825277   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0429 00:03:31.825744   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:31.826287   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:31.826315   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:31.826612   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:31.826799   40871 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:03:31.827000   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:31.827036   40871 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:03:31.829980   40871 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:31.830456   40871 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:03:31.830484   40871 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:31.830642   40871 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:03:31.830819   40871 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:03:31.830991   40871 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:03:31.831134   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:03:31.916280   40871 ssh_runner.go:195] Run: systemctl --version
	I0429 00:03:31.924688   40871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:03:31.944490   40871 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:03:31.944528   40871 api_server.go:166] Checking apiserver status ...
	I0429 00:03:31.944571   40871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:03:31.965259   40871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:03:31.976273   40871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:03:31.976320   40871 ssh_runner.go:195] Run: ls
	I0429 00:03:31.981405   40871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:03:31.988215   40871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:03:31.988235   40871 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:03:31.988245   40871 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:03:31.988630   40871 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:03:31.989128   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:31.989164   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:32.004697   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0429 00:03:32.005188   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:32.005615   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:32.005635   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:32.005971   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:32.006180   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:03:32.007639   40871 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:03:32.007653   40871 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:03:32.007910   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:32.007940   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:32.022171   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0429 00:03:32.022588   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:32.023056   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:32.023082   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:32.023414   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:32.023574   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:03:32.026004   40871 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:32.026442   40871 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:03:32.026469   40871 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:32.026550   40871 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:03:32.026822   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:32.026858   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:32.041651   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0429 00:03:32.042055   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:32.042502   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:32.042525   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:32.042801   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:32.042995   40871 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:03:32.043153   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:32.043169   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:03:32.045771   40871 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:32.046192   40871 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:03:32.046217   40871 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:32.046375   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:03:32.046522   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:03:32.046667   40871 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:03:32.046753   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:03:50.534198   40871 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:03:50.534289   40871 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:03:50.534304   40871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:03:50.534311   40871 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:03:50.534329   40871 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:03:50.534336   40871 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:03:50.534712   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.534760   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.549671   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I0429 00:03:50.550094   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.550585   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.550611   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.550927   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.551124   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:03:50.552819   40871 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:03:50.552832   40871 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:03:50.553144   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.553186   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.567804   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38273
	I0429 00:03:50.568165   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.568667   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.568687   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.568957   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.569130   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:03:50.571320   40871 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:50.571724   40871 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:03:50.571751   40871 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:50.571851   40871 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:03:50.572227   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.572296   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.585906   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36893
	I0429 00:03:50.586296   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.586747   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.586769   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.587030   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.587217   40871 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:03:50.587389   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:50.587408   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:03:50.589853   40871 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:50.590237   40871 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:03:50.590264   40871 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:50.590389   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:03:50.590559   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:03:50.590708   40871 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:03:50.590819   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:03:50.679783   40871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:03:50.699022   40871 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:03:50.699050   40871 api_server.go:166] Checking apiserver status ...
	I0429 00:03:50.699089   40871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:03:50.717524   40871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:03:50.727767   40871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:03:50.727835   40871 ssh_runner.go:195] Run: ls
	I0429 00:03:50.732913   40871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:03:50.738840   40871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:03:50.738865   40871 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:03:50.738891   40871 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:03:50.738917   40871 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:03:50.739211   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.739258   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.754082   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0429 00:03:50.754532   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.754951   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.754977   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.755258   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.755457   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:03:50.757004   40871 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:03:50.757032   40871 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:03:50.757302   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.757350   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.772173   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I0429 00:03:50.772632   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.773137   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.773163   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.773474   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.773666   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:03:50.776211   40871 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:03:50.776670   40871 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:03:50.776719   40871 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:03:50.776830   40871 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:03:50.777120   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:50.777156   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:50.792047   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0429 00:03:50.792534   40871 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:50.793111   40871 main.go:141] libmachine: Using API Version  1
	I0429 00:03:50.793140   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:50.793482   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:50.793727   40871 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:03:50.793923   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:50.793942   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:03:50.796778   40871 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:03:50.797177   40871 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:03:50.797204   40871 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:03:50.797285   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:03:50.797450   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:03:50.797624   40871 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:03:50.797754   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:03:50.884394   40871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:03:50.904289   40871 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-274394 -n ha-274394
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-274394 logs -n 25: (1.574243163s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m03_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m04 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp testdata/cp-test.txt                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m04_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03:/home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m03 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-274394 node stop m02 -v=7                                                     | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 23:56:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 23:56:44.603247   36356 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:56:44.603339   36356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:56:44.603350   36356 out.go:304] Setting ErrFile to fd 2...
	I0428 23:56:44.603354   36356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:56:44.603524   36356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:56:44.604037   36356 out.go:298] Setting JSON to false
	I0428 23:56:44.604835   36356 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5949,"bootTime":1714342656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:56:44.604886   36356 start.go:139] virtualization: kvm guest
	I0428 23:56:44.607006   36356 out.go:177] * [ha-274394] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:56:44.608416   36356 notify.go:220] Checking for updates...
	I0428 23:56:44.609889   36356 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 23:56:44.611307   36356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:56:44.612625   36356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:56:44.613862   36356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:44.615062   36356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0428 23:56:44.616343   36356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 23:56:44.617967   36356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:56:44.652686   36356 out.go:177] * Using the kvm2 driver based on user configuration
	I0428 23:56:44.653931   36356 start.go:297] selected driver: kvm2
	I0428 23:56:44.653943   36356 start.go:901] validating driver "kvm2" against <nil>
	I0428 23:56:44.653953   36356 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 23:56:44.654662   36356 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:56:44.654727   36356 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0428 23:56:44.669647   36356 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0428 23:56:44.669711   36356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 23:56:44.669935   36356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 23:56:44.669992   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:56:44.670004   36356 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 23:56:44.670008   36356 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 23:56:44.670095   36356 start.go:340] cluster config:
	{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:56:44.670188   36356 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:56:44.672641   36356 out.go:177] * Starting "ha-274394" primary control-plane node in "ha-274394" cluster
	I0428 23:56:44.673990   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:56:44.674079   36356 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:56:44.674091   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:56:44.674167   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:56:44.674177   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:56:44.674499   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:56:44.674522   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json: {Name:mka29a6cba1291c4c68f145dccef6ba110940a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:56:44.674652   36356 start.go:360] acquireMachinesLock for ha-274394: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:56:44.674679   36356 start.go:364] duration metric: took 14.805µs to acquireMachinesLock for "ha-274394"
	I0428 23:56:44.674692   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:56:44.674751   36356 start.go:125] createHost starting for "" (driver="kvm2")
	I0428 23:56:44.676337   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:56:44.676466   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:56:44.676503   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:56:44.690945   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I0428 23:56:44.691290   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:56:44.691875   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:56:44.691902   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:56:44.692184   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:56:44.692373   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:56:44.692481   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:56:44.692639   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:56:44.692672   36356 client.go:168] LocalClient.Create starting
	I0428 23:56:44.692707   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:56:44.692765   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:56:44.692791   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:56:44.692853   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:56:44.692882   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:56:44.692901   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:56:44.692925   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:56:44.692937   36356 main.go:141] libmachine: (ha-274394) Calling .PreCreateCheck
	I0428 23:56:44.693213   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:56:44.693560   36356 main.go:141] libmachine: Creating machine...
	I0428 23:56:44.693574   36356 main.go:141] libmachine: (ha-274394) Calling .Create
	I0428 23:56:44.693695   36356 main.go:141] libmachine: (ha-274394) Creating KVM machine...
	I0428 23:56:44.694900   36356 main.go:141] libmachine: (ha-274394) DBG | found existing default KVM network
	I0428 23:56:44.695582   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.695473   36379 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0428 23:56:44.695617   36356 main.go:141] libmachine: (ha-274394) DBG | created network xml: 
	I0428 23:56:44.695637   36356 main.go:141] libmachine: (ha-274394) DBG | <network>
	I0428 23:56:44.695651   36356 main.go:141] libmachine: (ha-274394) DBG |   <name>mk-ha-274394</name>
	I0428 23:56:44.695670   36356 main.go:141] libmachine: (ha-274394) DBG |   <dns enable='no'/>
	I0428 23:56:44.695685   36356 main.go:141] libmachine: (ha-274394) DBG |   
	I0428 23:56:44.695695   36356 main.go:141] libmachine: (ha-274394) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0428 23:56:44.695714   36356 main.go:141] libmachine: (ha-274394) DBG |     <dhcp>
	I0428 23:56:44.695730   36356 main.go:141] libmachine: (ha-274394) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0428 23:56:44.695740   36356 main.go:141] libmachine: (ha-274394) DBG |     </dhcp>
	I0428 23:56:44.695758   36356 main.go:141] libmachine: (ha-274394) DBG |   </ip>
	I0428 23:56:44.695763   36356 main.go:141] libmachine: (ha-274394) DBG |   
	I0428 23:56:44.695767   36356 main.go:141] libmachine: (ha-274394) DBG | </network>
	I0428 23:56:44.695774   36356 main.go:141] libmachine: (ha-274394) DBG | 
	I0428 23:56:44.700784   36356 main.go:141] libmachine: (ha-274394) DBG | trying to create private KVM network mk-ha-274394 192.168.39.0/24...
	I0428 23:56:44.765647   36356 main.go:141] libmachine: (ha-274394) DBG | private KVM network mk-ha-274394 192.168.39.0/24 created
	I0428 23:56:44.765680   36356 main.go:141] libmachine: (ha-274394) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 ...
	I0428 23:56:44.765707   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.765600   36379 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:44.765726   36356 main.go:141] libmachine: (ha-274394) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:56:44.765810   36356 main.go:141] libmachine: (ha-274394) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:56:44.991025   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.990901   36379 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa...
	I0428 23:56:45.061669   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:45.061561   36379 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/ha-274394.rawdisk...
	I0428 23:56:45.061712   36356 main.go:141] libmachine: (ha-274394) DBG | Writing magic tar header
	I0428 23:56:45.061726   36356 main.go:141] libmachine: (ha-274394) DBG | Writing SSH key tar header
	I0428 23:56:45.061742   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:45.061686   36379 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 ...
	I0428 23:56:45.061871   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394
	I0428 23:56:45.061933   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 (perms=drwx------)
	I0428 23:56:45.061961   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:56:45.061982   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:45.061998   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:56:45.062011   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:56:45.062043   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:56:45.062056   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:56:45.062068   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:56:45.062082   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:56:45.062094   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:56:45.062107   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home
	I0428 23:56:45.062116   36356 main.go:141] libmachine: (ha-274394) DBG | Skipping /home - not owner
	I0428 23:56:45.062127   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:56:45.062136   36356 main.go:141] libmachine: (ha-274394) Creating domain...
	I0428 23:56:45.062965   36356 main.go:141] libmachine: (ha-274394) define libvirt domain using xml: 
	I0428 23:56:45.062990   36356 main.go:141] libmachine: (ha-274394) <domain type='kvm'>
	I0428 23:56:45.063000   36356 main.go:141] libmachine: (ha-274394)   <name>ha-274394</name>
	I0428 23:56:45.063009   36356 main.go:141] libmachine: (ha-274394)   <memory unit='MiB'>2200</memory>
	I0428 23:56:45.063020   36356 main.go:141] libmachine: (ha-274394)   <vcpu>2</vcpu>
	I0428 23:56:45.063029   36356 main.go:141] libmachine: (ha-274394)   <features>
	I0428 23:56:45.063041   36356 main.go:141] libmachine: (ha-274394)     <acpi/>
	I0428 23:56:45.063045   36356 main.go:141] libmachine: (ha-274394)     <apic/>
	I0428 23:56:45.063053   36356 main.go:141] libmachine: (ha-274394)     <pae/>
	I0428 23:56:45.063058   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063066   36356 main.go:141] libmachine: (ha-274394)   </features>
	I0428 23:56:45.063071   36356 main.go:141] libmachine: (ha-274394)   <cpu mode='host-passthrough'>
	I0428 23:56:45.063078   36356 main.go:141] libmachine: (ha-274394)   
	I0428 23:56:45.063085   36356 main.go:141] libmachine: (ha-274394)   </cpu>
	I0428 23:56:45.063111   36356 main.go:141] libmachine: (ha-274394)   <os>
	I0428 23:56:45.063132   36356 main.go:141] libmachine: (ha-274394)     <type>hvm</type>
	I0428 23:56:45.063145   36356 main.go:141] libmachine: (ha-274394)     <boot dev='cdrom'/>
	I0428 23:56:45.063156   36356 main.go:141] libmachine: (ha-274394)     <boot dev='hd'/>
	I0428 23:56:45.063168   36356 main.go:141] libmachine: (ha-274394)     <bootmenu enable='no'/>
	I0428 23:56:45.063177   36356 main.go:141] libmachine: (ha-274394)   </os>
	I0428 23:56:45.063188   36356 main.go:141] libmachine: (ha-274394)   <devices>
	I0428 23:56:45.063199   36356 main.go:141] libmachine: (ha-274394)     <disk type='file' device='cdrom'>
	I0428 23:56:45.063232   36356 main.go:141] libmachine: (ha-274394)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/boot2docker.iso'/>
	I0428 23:56:45.063257   36356 main.go:141] libmachine: (ha-274394)       <target dev='hdc' bus='scsi'/>
	I0428 23:56:45.063269   36356 main.go:141] libmachine: (ha-274394)       <readonly/>
	I0428 23:56:45.063287   36356 main.go:141] libmachine: (ha-274394)     </disk>
	I0428 23:56:45.063297   36356 main.go:141] libmachine: (ha-274394)     <disk type='file' device='disk'>
	I0428 23:56:45.063303   36356 main.go:141] libmachine: (ha-274394)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:56:45.063311   36356 main.go:141] libmachine: (ha-274394)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/ha-274394.rawdisk'/>
	I0428 23:56:45.063319   36356 main.go:141] libmachine: (ha-274394)       <target dev='hda' bus='virtio'/>
	I0428 23:56:45.063323   36356 main.go:141] libmachine: (ha-274394)     </disk>
	I0428 23:56:45.063332   36356 main.go:141] libmachine: (ha-274394)     <interface type='network'>
	I0428 23:56:45.063338   36356 main.go:141] libmachine: (ha-274394)       <source network='mk-ha-274394'/>
	I0428 23:56:45.063344   36356 main.go:141] libmachine: (ha-274394)       <model type='virtio'/>
	I0428 23:56:45.063349   36356 main.go:141] libmachine: (ha-274394)     </interface>
	I0428 23:56:45.063359   36356 main.go:141] libmachine: (ha-274394)     <interface type='network'>
	I0428 23:56:45.063379   36356 main.go:141] libmachine: (ha-274394)       <source network='default'/>
	I0428 23:56:45.063391   36356 main.go:141] libmachine: (ha-274394)       <model type='virtio'/>
	I0428 23:56:45.063403   36356 main.go:141] libmachine: (ha-274394)     </interface>
	I0428 23:56:45.063417   36356 main.go:141] libmachine: (ha-274394)     <serial type='pty'>
	I0428 23:56:45.063428   36356 main.go:141] libmachine: (ha-274394)       <target port='0'/>
	I0428 23:56:45.063437   36356 main.go:141] libmachine: (ha-274394)     </serial>
	I0428 23:56:45.063445   36356 main.go:141] libmachine: (ha-274394)     <console type='pty'>
	I0428 23:56:45.063455   36356 main.go:141] libmachine: (ha-274394)       <target type='serial' port='0'/>
	I0428 23:56:45.063474   36356 main.go:141] libmachine: (ha-274394)     </console>
	I0428 23:56:45.063484   36356 main.go:141] libmachine: (ha-274394)     <rng model='virtio'>
	I0428 23:56:45.063512   36356 main.go:141] libmachine: (ha-274394)       <backend model='random'>/dev/random</backend>
	I0428 23:56:45.063538   36356 main.go:141] libmachine: (ha-274394)     </rng>
	I0428 23:56:45.063550   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063572   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063584   36356 main.go:141] libmachine: (ha-274394)   </devices>
	I0428 23:56:45.063593   36356 main.go:141] libmachine: (ha-274394) </domain>
	I0428 23:56:45.063601   36356 main.go:141] libmachine: (ha-274394) 
	I0428 23:56:45.067836   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a6:1d:f8 in network default
	I0428 23:56:45.068304   36356 main.go:141] libmachine: (ha-274394) Ensuring networks are active...
	I0428 23:56:45.068323   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:45.068905   36356 main.go:141] libmachine: (ha-274394) Ensuring network default is active
	I0428 23:56:45.069173   36356 main.go:141] libmachine: (ha-274394) Ensuring network mk-ha-274394 is active
	I0428 23:56:45.069648   36356 main.go:141] libmachine: (ha-274394) Getting domain xml...
	I0428 23:56:45.070358   36356 main.go:141] libmachine: (ha-274394) Creating domain...
	I0428 23:56:46.229124   36356 main.go:141] libmachine: (ha-274394) Waiting to get IP...
	I0428 23:56:46.229873   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.230293   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.230321   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.230266   36379 retry.go:31] will retry after 256.079887ms: waiting for machine to come up
	I0428 23:56:46.487746   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.488167   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.488190   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.488135   36379 retry.go:31] will retry after 259.573037ms: waiting for machine to come up
	I0428 23:56:46.749564   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.749940   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.749971   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.749894   36379 retry.go:31] will retry after 421.248911ms: waiting for machine to come up
	I0428 23:56:47.172578   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:47.173101   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:47.173132   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:47.173077   36379 retry.go:31] will retry after 446.554138ms: waiting for machine to come up
	I0428 23:56:47.621636   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:47.622039   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:47.622068   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:47.621985   36379 retry.go:31] will retry after 623.05137ms: waiting for machine to come up
	I0428 23:56:48.246898   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:48.247325   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:48.247347   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:48.247304   36379 retry.go:31] will retry after 674.412309ms: waiting for machine to come up
	I0428 23:56:48.922759   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:48.923073   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:48.923103   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:48.923031   36379 retry.go:31] will retry after 750.488538ms: waiting for machine to come up
	I0428 23:56:49.675196   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:49.675579   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:49.675614   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:49.675525   36379 retry.go:31] will retry after 1.274430052s: waiting for machine to come up
	I0428 23:56:50.951373   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:50.951753   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:50.951780   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:50.951712   36379 retry.go:31] will retry after 1.440496033s: waiting for machine to come up
	I0428 23:56:52.393417   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:52.393792   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:52.393814   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:52.393746   36379 retry.go:31] will retry after 2.10240003s: waiting for machine to come up
	I0428 23:56:54.497430   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:54.497829   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:54.497858   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:54.497777   36379 retry.go:31] will retry after 1.935763747s: waiting for machine to come up
	I0428 23:56:56.434877   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:56.435313   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:56.435343   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:56.435254   36379 retry.go:31] will retry after 2.246149526s: waiting for machine to come up
	I0428 23:56:58.684702   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:58.685119   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:58.685143   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:58.685091   36379 retry.go:31] will retry after 2.753267841s: waiting for machine to come up
	I0428 23:57:01.439496   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:01.439726   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:57:01.439748   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:57:01.439695   36379 retry.go:31] will retry after 4.35224695s: waiting for machine to come up
	I0428 23:57:05.794060   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.794442   36356 main.go:141] libmachine: (ha-274394) Found IP for machine: 192.168.39.237
	I0428 23:57:05.794467   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has current primary IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.794476   36356 main.go:141] libmachine: (ha-274394) Reserving static IP address...
	I0428 23:57:05.794825   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find host DHCP lease matching {name: "ha-274394", mac: "52:54:00:a1:02:06", ip: "192.168.39.237"} in network mk-ha-274394
	I0428 23:57:05.865762   36356 main.go:141] libmachine: (ha-274394) DBG | Getting to WaitForSSH function...
	I0428 23:57:05.865787   36356 main.go:141] libmachine: (ha-274394) Reserved static IP address: 192.168.39.237
	I0428 23:57:05.865796   36356 main.go:141] libmachine: (ha-274394) Waiting for SSH to be available...
	I0428 23:57:05.868238   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.868679   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:05.868710   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.868753   36356 main.go:141] libmachine: (ha-274394) DBG | Using SSH client type: external
	I0428 23:57:05.868785   36356 main.go:141] libmachine: (ha-274394) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa (-rw-------)
	I0428 23:57:05.868821   36356 main.go:141] libmachine: (ha-274394) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:57:05.868837   36356 main.go:141] libmachine: (ha-274394) DBG | About to run SSH command:
	I0428 23:57:05.868862   36356 main.go:141] libmachine: (ha-274394) DBG | exit 0
	I0428 23:57:05.995181   36356 main.go:141] libmachine: (ha-274394) DBG | SSH cmd err, output: <nil>: 
	I0428 23:57:05.995442   36356 main.go:141] libmachine: (ha-274394) KVM machine creation complete!
	I0428 23:57:05.995758   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:57:05.996317   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:05.996503   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:05.996642   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:57:05.996686   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:05.998130   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:57:05.998147   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:57:05.998155   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:57:05.998161   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.000506   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.000786   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.000807   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.001000   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.001156   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.001304   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.001400   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.001519   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.001696   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.001705   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:57:06.105745   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:57:06.105775   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:57:06.105785   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.108180   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.108463   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.108483   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.108623   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.108837   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.108991   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.109095   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.109257   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.109433   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.109445   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:57:06.219616   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:57:06.219678   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:57:06.219688   36356 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:57:06.219711   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.219970   36356 buildroot.go:166] provisioning hostname "ha-274394"
	I0428 23:57:06.219993   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.220152   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.222516   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.222928   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.222956   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.223088   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.223273   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.223400   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.223530   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.223732   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.223884   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.223895   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394 && echo "ha-274394" | sudo tee /etc/hostname
	I0428 23:57:06.346120   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0428 23:57:06.346147   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.348916   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.349259   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.349290   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.349437   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.349605   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.349770   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.349918   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.350107   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.350259   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.350280   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:57:06.464092   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:57:06.464118   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:57:06.464151   36356 buildroot.go:174] setting up certificates
	I0428 23:57:06.464163   36356 provision.go:84] configureAuth start
	I0428 23:57:06.464185   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.464469   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:06.467030   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.467355   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.467387   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.467540   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.470563   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.470888   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.470907   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.471082   36356 provision.go:143] copyHostCerts
	I0428 23:57:06.471126   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:57:06.471183   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:57:06.471207   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:57:06.471291   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:57:06.471386   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:57:06.471410   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:57:06.471420   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:57:06.471456   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:57:06.471517   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:57:06.471540   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:57:06.471549   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:57:06.471584   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:57:06.471645   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394 san=[127.0.0.1 192.168.39.237 ha-274394 localhost minikube]
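The server certificate generated here is signed by the profile CA and carries the SAN set listed above (loopback, node IP, hostname, localhost, minikube). A minimal openssl equivalent, assuming the ca.pem/ca-key.pem pair named in the log and illustrative output file names, would look roughly like:

	# Sketch: issue a CA-signed server certificate with the same SANs as this run.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.ha-274394" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.237,DNS:ha-274394,DNS:localhost,DNS:minikube')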
	I0428 23:57:06.573643   36356 provision.go:177] copyRemoteCerts
	I0428 23:57:06.573696   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:57:06.573720   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.576152   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.576514   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.576544   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.576665   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.576843   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.577001   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.577123   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:06.663863   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:57:06.663955   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:57:06.694572   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:57:06.694632   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 23:57:06.722982   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:57:06.723037   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 23:57:06.751340   36356 provision.go:87] duration metric: took 287.163137ms to configureAuth
	I0428 23:57:06.751365   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:57:06.751508   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:06.751564   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.753881   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.754233   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.754262   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.754433   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.754591   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.754749   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.754852   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.754990   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.755149   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.755166   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:57:07.035413   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:57:07.035450   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:57:07.035475   36356 main.go:141] libmachine: (ha-274394) Calling .GetURL
	I0428 23:57:07.036800   36356 main.go:141] libmachine: (ha-274394) DBG | Using libvirt version 6000000
	I0428 23:57:07.038840   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.039121   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.039148   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.039301   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:57:07.039313   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:57:07.039321   36356 client.go:171] duration metric: took 22.346638475s to LocalClient.Create
	I0428 23:57:07.039346   36356 start.go:167] duration metric: took 22.346709049s to libmachine.API.Create "ha-274394"
	I0428 23:57:07.039358   36356 start.go:293] postStartSetup for "ha-274394" (driver="kvm2")
	I0428 23:57:07.039372   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:57:07.039392   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.039621   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:57:07.039654   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.041418   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.041695   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.041721   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.041838   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.042035   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.042193   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.042358   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.126553   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:57:07.131394   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:57:07.131418   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:57:07.131489   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:57:07.131582   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:57:07.131595   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:57:07.131731   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:57:07.143264   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:57:07.169021   36356 start.go:296] duration metric: took 129.64708ms for postStartSetup
	I0428 23:57:07.169063   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:57:07.169591   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:07.172044   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.172385   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.172414   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.172640   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:07.172823   36356 start.go:128] duration metric: took 22.49806301s to createHost
	I0428 23:57:07.172850   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.174730   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.175018   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.175039   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.175144   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.175354   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.175490   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.175622   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.175762   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:07.175912   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:07.175931   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:57:07.283490   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348627.250495573
	
	I0428 23:57:07.283513   36356 fix.go:216] guest clock: 1714348627.250495573
	I0428 23:57:07.283522   36356 fix.go:229] Guest: 2024-04-28 23:57:07.250495573 +0000 UTC Remote: 2024-04-28 23:57:07.172835932 +0000 UTC m=+22.618724383 (delta=77.659641ms)
	I0428 23:57:07.283564   36356 fix.go:200] guest clock delta is within tolerance: 77.659641ms
	I0428 23:57:07.283580   36356 start.go:83] releasing machines lock for "ha-274394", held for 22.608884768s
	I0428 23:57:07.283601   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.283905   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:07.286602   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.286951   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.286996   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.287152   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287670   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287826   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287894   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:57:07.287938   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.288038   36356 ssh_runner.go:195] Run: cat /version.json
	I0428 23:57:07.288059   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.290428   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290560   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290791   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.290816   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290941   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.290945   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.291032   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.291107   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.291130   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.291298   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.291309   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.291469   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.291526   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.291680   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.367832   36356 ssh_runner.go:195] Run: systemctl --version
	I0428 23:57:07.389909   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:57:07.553715   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:57:07.560656   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:57:07.560728   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:57:07.580243   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:57:07.580269   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:57:07.580352   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:57:07.597855   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:57:07.614447   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:57:07.614507   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:57:07.630137   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:57:07.646659   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:57:07.770067   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:57:07.939717   36356 docker.go:233] disabling docker service ...
	I0428 23:57:07.939790   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:57:07.956532   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:57:07.970431   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:57:08.090398   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:57:08.208133   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:57:08.222537   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:57:08.243672   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:57:08.243746   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.254760   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:57:08.254827   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.265802   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.276580   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.287357   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:57:08.298925   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.310180   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.329330   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.340865   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:57:08.350992   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:57:08.351099   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:57:08.365295   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:57:08.375416   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:57:08.489130   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
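Taken together, the commands from the crictl endpoint file down to the crio restart are one configuration pass: point crictl at the CRI-O socket, pin the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf, make sure bridged traffic is visible to iptables, and restart the runtime. A condensed sketch using the values from this run:

	# Sketch: CRI-O runtime configuration as performed above (values from this run).
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml >/dev/null
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# br_netfilter may not be loaded on a fresh guest, hence the modprobe fallback seen above.
	sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 || sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio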
	I0428 23:57:08.627175   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:57:08.627252   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:57:08.632940   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:57:08.633035   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:57:08.637726   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:57:08.688300   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:57:08.688414   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:57:08.718921   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:57:08.752722   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:57:08.754156   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:08.756290   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:08.756627   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:08.756654   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:08.756870   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:57:08.761473   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:57:08.776539   36356 kubeadm.go:877] updating cluster {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 23:57:08.776707   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:57:08.776777   36356 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:57:08.813688   36356 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0428 23:57:08.813765   36356 ssh_runner.go:195] Run: which lz4
	I0428 23:57:08.818304   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 23:57:08.818436   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0428 23:57:08.823568   36356 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 23:57:08.823596   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0428 23:57:10.542324   36356 crio.go:462] duration metric: took 1.723924834s to copy over tarball
	I0428 23:57:10.542410   36356 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 23:57:13.112112   36356 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.569667818s)
	I0428 23:57:13.112142   36356 crio.go:469] duration metric: took 2.569786929s to extract the tarball
	I0428 23:57:13.112149   36356 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 23:57:13.152213   36356 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:57:13.202989   36356 crio.go:514] all images are preloaded for cri-o runtime.
	I0428 23:57:13.203014   36356 cache_images.go:84] Images are preloaded, skipping loading
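The preload logic above is a cache check plus bulk import: when crictl shows no kube-apiserver image for the target version, the lz4-compressed tarball is copied over and unpacked into /var, after which the check passes. A stand-alone sketch (tarball path as used in this run):

	# Sketch: unpack a preloaded image tarball into CRI-O's storage, as the log does.
	TARBALL=/preloaded.tar.lz4
	if ! sudo crictl images --output json | grep -q 'kube-apiserver:v1.30.0'; then
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
	  sudo rm -f "$TARBALL"
	fi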
	I0428 23:57:13.203023   36356 kubeadm.go:928] updating node { 192.168.39.237 8443 v1.30.0 crio true true} ...
	I0428 23:57:13.203155   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
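The kubelet flags above end up in a systemd drop-in (the 309-byte 10-kubeadm.conf copied a few lines below). Written out by hand with this run's node values, an equivalent drop-in would be roughly:

	# Sketch: kubelet systemd drop-in equivalent to the unit content above.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	EOF
	sudo systemctl daemon-reload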
	I0428 23:57:13.203239   36356 ssh_runner.go:195] Run: crio config
	I0428 23:57:13.256369   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:57:13.256390   36356 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 23:57:13.256398   36356 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 23:57:13.256417   36356 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-274394 NodeName:ha-274394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 23:57:13.256553   36356 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-274394"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
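The kubeadm configuration above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied into place. A file like this can be sanity-checked beforehand with stock kubeadm subcommands (recent kubeadm releases; not part of this run):

	# Sketch: inspect kubeadm defaults and validate the generated config file.
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml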
	
	I0428 23:57:13.256576   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:57:13.256610   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:57:13.276673   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:57:13.276754   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
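Once this static pod is running on the elected leader, kube-vip claims the HA VIP (192.168.39.254 in this run) on eth0 and load-balances port 8443 across control-plane nodes. A quick check from a node, not part of the log, would be:

	# Sketch: verify the VIP is bound and the API server answers through it.
	ip -4 addr show dev eth0 | grep -F 192.168.39.254
	curl -k https://192.168.39.254:8443/healthz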
	I0428 23:57:13.276809   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:57:13.289710   36356 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 23:57:13.289767   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 23:57:13.302563   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 23:57:13.323639   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:57:13.343843   36356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0428 23:57:13.363730   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 23:57:13.384050   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:57:13.388734   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:57:13.404951   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:57:13.547364   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:57:13.573283   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.237
	I0428 23:57:13.573311   36356 certs.go:194] generating shared ca certs ...
	I0428 23:57:13.573326   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.573483   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:57:13.573525   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:57:13.573535   36356 certs.go:256] generating profile certs ...
	I0428 23:57:13.573586   36356 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:57:13.573615   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt with IP's: []
	I0428 23:57:13.648288   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt ...
	I0428 23:57:13.648320   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt: {Name:mk32ae7dfd9f9a702d9db8b5322b2bf08a48e9fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.648491   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key ...
	I0428 23:57:13.648503   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key: {Name:mk3088da440752b13c33384f2e40d936a105f5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.648587   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582
	I0428 23:57:13.648604   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.254]
	I0428 23:57:13.811379   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 ...
	I0428 23:57:13.811407   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582: {Name:mkec37f4828f6d0d617a8817ad0cb65319dfc837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.811571   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582 ...
	I0428 23:57:13.811589   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582: {Name:mk3bb5ee50351cf2b6f1de8651fd8346e52caf40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.811694   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:57:13.811784   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:57:13.811836   36356 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:57:13.811856   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt with IP's: []
	I0428 23:57:14.027352   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt ...
	I0428 23:57:14.027379   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt: {Name:mkf2b62fd6e6eae93da857bc5cdce5be75eb4616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:14.027538   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key ...
	I0428 23:57:14.027550   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key: {Name:mkca8eb4eb8045104b93c56b349092a4368aa735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:14.027645   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:57:14.027664   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:57:14.027680   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:57:14.027695   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:57:14.027706   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:57:14.027720   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:57:14.027732   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:57:14.027743   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:57:14.027789   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:57:14.027820   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:57:14.027829   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:57:14.027856   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:57:14.027877   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:57:14.027897   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:57:14.027934   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:57:14.027960   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.027974   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.027986   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.028516   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:57:14.056477   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:57:14.083299   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:57:14.110784   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:57:14.138763   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 23:57:14.165666   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:57:14.193648   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:57:14.223287   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:57:14.253425   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:57:14.284140   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:57:14.314319   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:57:14.343119   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 23:57:14.364726   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:57:14.375141   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:57:14.395577   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.401302   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.401384   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.410279   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:57:14.425476   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:57:14.438080   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.442930   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.442975   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.449030   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0428 23:57:14.461177   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:57:14.473485   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.478398   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.478444   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.484622   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
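The openssl/ln pairs above install each CA into OpenSSL's hashed trust layout: a certificate is linked as <subject-hash>.0 under /etc/ssl/certs so verification can find it by hash. The generic pattern (path copied from this run) is:

	# Sketch: add a CA to the OpenSSL trust directory via its subject-hash symlink.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"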
	I0428 23:57:14.497144   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:57:14.501613   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:57:14.501664   36356 kubeadm.go:391] StartCluster: {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:57:14.501755   36356 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0428 23:57:14.501787   36356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0428 23:57:14.548339   36356 cri.go:91] found id: ""
	I0428 23:57:14.548426   36356 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 23:57:14.560473   36356 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 23:57:14.572795   36356 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 23:57:14.584608   36356 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 23:57:14.584629   36356 kubeadm.go:156] found existing configuration files:
	
	I0428 23:57:14.584677   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 23:57:14.595801   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 23:57:14.595872   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 23:57:14.608553   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 23:57:14.805450   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 23:57:14.805508   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 23:57:14.816940   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 23:57:14.827476   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 23:57:14.827533   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 23:57:14.838818   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 23:57:14.849316   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 23:57:14.849374   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 23:57:14.860533   36356 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 23:57:14.970911   36356 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 23:57:14.971092   36356 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 23:57:15.140642   36356 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 23:57:15.140791   36356 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 23:57:15.140938   36356 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0428 23:57:15.402735   36356 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 23:57:15.581497   36356 out.go:204]   - Generating certificates and keys ...
	I0428 23:57:15.581620   36356 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 23:57:15.581696   36356 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 23:57:15.581791   36356 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 23:57:15.742471   36356 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 23:57:15.880482   36356 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 23:57:16.079408   36356 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 23:57:16.265709   36356 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 23:57:16.265929   36356 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-274394 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0428 23:57:16.377253   36356 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 23:57:16.377392   36356 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-274394 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0428 23:57:16.568167   36356 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 23:57:16.755727   36356 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 23:57:17.068472   36356 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 23:57:17.068836   36356 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 23:57:17.224359   36356 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 23:57:17.587671   36356 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 23:57:17.762573   36356 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 23:57:17.944221   36356 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 23:57:18.245238   36356 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 23:57:18.245785   36356 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 23:57:18.249440   36356 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 23:57:18.251518   36356 out.go:204]   - Booting up control plane ...
	I0428 23:57:18.251619   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 23:57:18.251797   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 23:57:18.252676   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 23:57:18.269919   36356 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 23:57:18.270872   36356 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 23:57:18.270967   36356 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 23:57:18.409785   36356 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 23:57:18.409916   36356 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 23:57:19.408097   36356 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001619721s
	I0428 23:57:19.408195   36356 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 23:57:25.389394   36356 kubeadm.go:309] [api-check] The API server is healthy after 5.984864862s
	I0428 23:57:25.404982   36356 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 23:57:25.422266   36356 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 23:57:25.456777   36356 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 23:57:25.456979   36356 kubeadm.go:309] [mark-control-plane] Marking the node ha-274394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 23:57:25.476230   36356 kubeadm.go:309] [bootstrap-token] Using token: p7cwcq.w3fzbiomge83y6x5
	I0428 23:57:25.477875   36356 out.go:204]   - Configuring RBAC rules ...
	I0428 23:57:25.478055   36356 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 23:57:25.485371   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 23:57:25.497525   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 23:57:25.502880   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 23:57:25.507998   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 23:57:25.515623   36356 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 23:57:25.797907   36356 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 23:57:26.231776   36356 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 23:57:26.796535   36356 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 23:57:26.797290   36356 kubeadm.go:309] 
	I0428 23:57:26.797374   36356 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 23:57:26.797385   36356 kubeadm.go:309] 
	I0428 23:57:26.797496   36356 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 23:57:26.797518   36356 kubeadm.go:309] 
	I0428 23:57:26.797583   36356 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 23:57:26.797662   36356 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 23:57:26.797737   36356 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 23:57:26.797770   36356 kubeadm.go:309] 
	I0428 23:57:26.797838   36356 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 23:57:26.797857   36356 kubeadm.go:309] 
	I0428 23:57:26.797939   36356 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 23:57:26.797950   36356 kubeadm.go:309] 
	I0428 23:57:26.798054   36356 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 23:57:26.798154   36356 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 23:57:26.798260   36356 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 23:57:26.798268   36356 kubeadm.go:309] 
	I0428 23:57:26.798355   36356 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 23:57:26.798421   36356 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 23:57:26.798427   36356 kubeadm.go:309] 
	I0428 23:57:26.798493   36356 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p7cwcq.w3fzbiomge83y6x5 \
	I0428 23:57:26.798582   36356 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 \
	I0428 23:57:26.798602   36356 kubeadm.go:309] 	--control-plane 
	I0428 23:57:26.798608   36356 kubeadm.go:309] 
	I0428 23:57:26.798682   36356 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 23:57:26.798693   36356 kubeadm.go:309] 
	I0428 23:57:26.798806   36356 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p7cwcq.w3fzbiomge83y6x5 \
	I0428 23:57:26.798954   36356 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 
	I0428 23:57:26.799378   36356 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0428 23:57:26.799459   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:57:26.799472   36356 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 23:57:26.801505   36356 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 23:57:26.802707   36356 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 23:57:26.808925   36356 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 23:57:26.808943   36356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 23:57:26.834378   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
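
The two steps above (scp the rendered CNI manifest to /var/tmp/minikube/cni.yaml, then apply it with the in-VM kubectl and kubeconfig) can be sketched as follows. This is a simplified, local illustration rather than minikube's ssh_runner code, and the manifest contents here are a placeholder.

// Hypothetical local sketch of the CNI-apply step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyCNIManifest(manifest []byte) error {
	const target = "/var/tmp/minikube/cni.yaml"
	// Stage the rendered manifest where the log shows it being copied.
	if err := os.WriteFile(target, manifest, 0o644); err != nil {
		return fmt.Errorf("stage manifest: %w", err)
	}
	// Apply it with the bundled kubectl against the in-VM kubeconfig.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", target)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := applyCNIManifest([]byte("# kindnet manifest would go here\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
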
	I0428 23:57:27.201663   36356 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 23:57:27.201739   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:27.201741   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394 minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=true
	I0428 23:57:27.218086   36356 ops.go:34] apiserver oom_adj: -16
	I0428 23:57:27.424076   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:27.925075   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:28.424356   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:28.924339   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:29.424513   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:29.924288   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:30.425076   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:30.924258   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:31.424755   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:31.924830   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:32.424739   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:32.924320   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:33.425053   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:33.924560   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:34.424815   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:34.924173   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:35.424207   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:35.924496   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:36.425172   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:36.924859   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:37.424869   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:37.924250   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:38.424267   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:38.924424   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:39.082874   36356 kubeadm.go:1107] duration metric: took 11.881195164s to wait for elevateKubeSystemPrivileges
	W0428 23:57:39.082918   36356 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 23:57:39.082928   36356 kubeadm.go:393] duration metric: took 24.581266215s to StartCluster
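
The burst of "kubectl get sa default" runs above is minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists before elevating kube-system privileges (the ~11.9s duration metric). A minimal local sketch of that polling loop, using the paths from the log; running it locally rather than over SSH is an assumption.

// Poll for the default ServiceAccount, mirroring the loop visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the log lines
	}
	fmt.Println("timed out waiting for default service account")
}
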
	I0428 23:57:39.082947   36356 settings.go:142] acquiring lock: {Name:mk4e6965347be51f4cd501030baea6b9cd2dbc9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:39.083032   36356 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:57:39.083795   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/kubeconfig: {Name:mk5412a370a0ddec304ff7697d6d137221e96742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:39.083984   36356 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:57:39.084009   36356 start.go:240] waiting for startup goroutines ...
	I0428 23:57:39.083994   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 23:57:39.084007   36356 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 23:57:39.084098   36356 addons.go:69] Setting storage-provisioner=true in profile "ha-274394"
	I0428 23:57:39.084105   36356 addons.go:69] Setting default-storageclass=true in profile "ha-274394"
	I0428 23:57:39.084136   36356 addons.go:234] Setting addon storage-provisioner=true in "ha-274394"
	I0428 23:57:39.084173   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:57:39.084193   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:39.084174   36356 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-274394"
	I0428 23:57:39.084599   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.084626   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.084651   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.084660   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.099465   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0428 23:57:39.099483   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0428 23:57:39.099923   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.099924   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.100433   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.100461   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.100515   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.100531   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.100847   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.100870   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.101048   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.101480   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.101623   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.103124   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:57:39.103379   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0428 23:57:39.103792   36356 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 23:57:39.103956   36356 addons.go:234] Setting addon default-storageclass=true in "ha-274394"
	I0428 23:57:39.103995   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:57:39.104250   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.104294   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.117517   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0428 23:57:39.117990   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.118551   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.118583   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.118929   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.119159   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.119482   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0428 23:57:39.119880   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.120375   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.120400   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.120739   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.120952   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:39.122980   36356 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 23:57:39.121312   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.124387   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.124487   36356 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:57:39.124512   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 23:57:39.124529   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:39.127583   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.128028   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:39.128055   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.128209   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:39.128392   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:39.128545   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:39.128689   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:39.140041   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0428 23:57:39.140489   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.140944   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.140962   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.141257   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.141435   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.143020   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:39.143276   36356 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 23:57:39.143289   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 23:57:39.143301   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:39.146287   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.146694   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:39.146732   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.146972   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:39.147160   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:39.147331   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:39.147464   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:39.275667   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 23:57:39.307241   36356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:57:39.452854   36356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 23:57:39.886409   36356 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
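
Each addon enable above follows the same pattern: stage the addon manifest under /etc/kubernetes/addons/ and apply it with KUBECONFIG pointed at the in-VM kubeconfig. A simplified local sketch of that pattern; in minikube the transfer happens over SSH, and the manifest body here is a placeholder.

// Hypothetical local sketch of the storage-provisioner addon apply step.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# storage-provisioner manifest would go here\n")
	const path = "/etc/kubernetes/addons/storage-provisioner.yaml"
	// Stage the manifest where the log shows it being copied.
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		log.Fatal(err)
	}
	// sudo KUBECONFIG=... kubectl apply -f <manifest>, as in the logged command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"apply", "-f", path)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("apply addon: %v: %s", err, out)
	}
}
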
	I0428 23:57:40.144028   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144057   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144093   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144128   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144337   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144352   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144360   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144366   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144465   36356 main.go:141] libmachine: (ha-274394) DBG | Closing plugin on server side
	I0428 23:57:40.144477   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144503   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144521   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144529   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144593   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144632   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144616   36356 main.go:141] libmachine: (ha-274394) DBG | Closing plugin on server side
	I0428 23:57:40.144722   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144737   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144861   36356 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 23:57:40.144875   36356 round_trippers.go:469] Request Headers:
	I0428 23:57:40.144885   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:57:40.144889   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:57:40.159830   36356 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0428 23:57:40.160397   36356 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 23:57:40.160413   36356 round_trippers.go:469] Request Headers:
	I0428 23:57:40.160420   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:57:40.160426   36356 round_trippers.go:473]     Content-Type: application/json
	I0428 23:57:40.160431   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:57:40.164030   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:57:40.164307   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.164329   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.164622   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.164645   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.166553   36356 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 23:57:40.168054   36356 addons.go:505] duration metric: took 1.084044252s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 23:57:40.168092   36356 start.go:245] waiting for cluster config update ...
	I0428 23:57:40.168102   36356 start.go:254] writing updated cluster config ...
	I0428 23:57:40.170126   36356 out.go:177] 
	I0428 23:57:40.171839   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:40.171906   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:40.173902   36356 out.go:177] * Starting "ha-274394-m02" control-plane node in "ha-274394" cluster
	I0428 23:57:40.175173   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:57:40.175207   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:57:40.175324   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:57:40.175343   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:57:40.175453   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:40.175692   36356 start.go:360] acquireMachinesLock for ha-274394-m02: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:57:40.175759   36356 start.go:364] duration metric: took 36.777µs to acquireMachinesLock for "ha-274394-m02"
	I0428 23:57:40.175784   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:57:40.175893   36356 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0428 23:57:40.177596   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:57:40.177686   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:40.177716   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:40.192465   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0428 23:57:40.192869   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:40.193339   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:40.193360   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:40.193660   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:40.193851   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:57:40.194005   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:57:40.194206   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:57:40.194237   36356 client.go:168] LocalClient.Create starting
	I0428 23:57:40.194281   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:57:40.194325   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:57:40.194343   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:57:40.194395   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:57:40.194413   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:57:40.194423   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:57:40.194437   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:57:40.194445   36356 main.go:141] libmachine: (ha-274394-m02) Calling .PreCreateCheck
	I0428 23:57:40.194610   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:57:40.194952   36356 main.go:141] libmachine: Creating machine...
	I0428 23:57:40.194964   36356 main.go:141] libmachine: (ha-274394-m02) Calling .Create
	I0428 23:57:40.195082   36356 main.go:141] libmachine: (ha-274394-m02) Creating KVM machine...
	I0428 23:57:40.196448   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found existing default KVM network
	I0428 23:57:40.196567   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found existing private KVM network mk-ha-274394
	I0428 23:57:40.196728   36356 main.go:141] libmachine: (ha-274394-m02) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 ...
	I0428 23:57:40.196753   36356 main.go:141] libmachine: (ha-274394-m02) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:57:40.196818   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.196708   36756 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:57:40.196940   36356 main.go:141] libmachine: (ha-274394-m02) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:57:40.430082   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.429932   36756 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa...
	I0428 23:57:40.583373   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.583223   36756 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/ha-274394-m02.rawdisk...
	I0428 23:57:40.583418   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Writing magic tar header
	I0428 23:57:40.583434   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Writing SSH key tar header
	I0428 23:57:40.583447   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.583383   36756 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 ...
	I0428 23:57:40.583532   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02
	I0428 23:57:40.583555   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 (perms=drwx------)
	I0428 23:57:40.583568   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:57:40.583584   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:57:40.583603   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:57:40.583619   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:57:40.583646   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:57:40.583660   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:57:40.583673   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:57:40.583685   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:57:40.583696   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:57:40.583708   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:57:40.583716   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home
	I0428 23:57:40.583733   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Skipping /home - not owner
	I0428 23:57:40.583747   36356 main.go:141] libmachine: (ha-274394-m02) Creating domain...
	I0428 23:57:40.584643   36356 main.go:141] libmachine: (ha-274394-m02) define libvirt domain using xml: 
	I0428 23:57:40.584666   36356 main.go:141] libmachine: (ha-274394-m02) <domain type='kvm'>
	I0428 23:57:40.584679   36356 main.go:141] libmachine: (ha-274394-m02)   <name>ha-274394-m02</name>
	I0428 23:57:40.584692   36356 main.go:141] libmachine: (ha-274394-m02)   <memory unit='MiB'>2200</memory>
	I0428 23:57:40.584703   36356 main.go:141] libmachine: (ha-274394-m02)   <vcpu>2</vcpu>
	I0428 23:57:40.584718   36356 main.go:141] libmachine: (ha-274394-m02)   <features>
	I0428 23:57:40.584731   36356 main.go:141] libmachine: (ha-274394-m02)     <acpi/>
	I0428 23:57:40.584741   36356 main.go:141] libmachine: (ha-274394-m02)     <apic/>
	I0428 23:57:40.584750   36356 main.go:141] libmachine: (ha-274394-m02)     <pae/>
	I0428 23:57:40.584760   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.584777   36356 main.go:141] libmachine: (ha-274394-m02)   </features>
	I0428 23:57:40.584788   36356 main.go:141] libmachine: (ha-274394-m02)   <cpu mode='host-passthrough'>
	I0428 23:57:40.584800   36356 main.go:141] libmachine: (ha-274394-m02)   
	I0428 23:57:40.584812   36356 main.go:141] libmachine: (ha-274394-m02)   </cpu>
	I0428 23:57:40.584825   36356 main.go:141] libmachine: (ha-274394-m02)   <os>
	I0428 23:57:40.584837   36356 main.go:141] libmachine: (ha-274394-m02)     <type>hvm</type>
	I0428 23:57:40.584848   36356 main.go:141] libmachine: (ha-274394-m02)     <boot dev='cdrom'/>
	I0428 23:57:40.584858   36356 main.go:141] libmachine: (ha-274394-m02)     <boot dev='hd'/>
	I0428 23:57:40.584869   36356 main.go:141] libmachine: (ha-274394-m02)     <bootmenu enable='no'/>
	I0428 23:57:40.584881   36356 main.go:141] libmachine: (ha-274394-m02)   </os>
	I0428 23:57:40.584889   36356 main.go:141] libmachine: (ha-274394-m02)   <devices>
	I0428 23:57:40.584903   36356 main.go:141] libmachine: (ha-274394-m02)     <disk type='file' device='cdrom'>
	I0428 23:57:40.584923   36356 main.go:141] libmachine: (ha-274394-m02)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/boot2docker.iso'/>
	I0428 23:57:40.584937   36356 main.go:141] libmachine: (ha-274394-m02)       <target dev='hdc' bus='scsi'/>
	I0428 23:57:40.584947   36356 main.go:141] libmachine: (ha-274394-m02)       <readonly/>
	I0428 23:57:40.584955   36356 main.go:141] libmachine: (ha-274394-m02)     </disk>
	I0428 23:57:40.584965   36356 main.go:141] libmachine: (ha-274394-m02)     <disk type='file' device='disk'>
	I0428 23:57:40.584977   36356 main.go:141] libmachine: (ha-274394-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:57:40.584989   36356 main.go:141] libmachine: (ha-274394-m02)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/ha-274394-m02.rawdisk'/>
	I0428 23:57:40.585019   36356 main.go:141] libmachine: (ha-274394-m02)       <target dev='hda' bus='virtio'/>
	I0428 23:57:40.585048   36356 main.go:141] libmachine: (ha-274394-m02)     </disk>
	I0428 23:57:40.585062   36356 main.go:141] libmachine: (ha-274394-m02)     <interface type='network'>
	I0428 23:57:40.585078   36356 main.go:141] libmachine: (ha-274394-m02)       <source network='mk-ha-274394'/>
	I0428 23:57:40.585090   36356 main.go:141] libmachine: (ha-274394-m02)       <model type='virtio'/>
	I0428 23:57:40.585100   36356 main.go:141] libmachine: (ha-274394-m02)     </interface>
	I0428 23:57:40.585112   36356 main.go:141] libmachine: (ha-274394-m02)     <interface type='network'>
	I0428 23:57:40.585122   36356 main.go:141] libmachine: (ha-274394-m02)       <source network='default'/>
	I0428 23:57:40.585133   36356 main.go:141] libmachine: (ha-274394-m02)       <model type='virtio'/>
	I0428 23:57:40.585143   36356 main.go:141] libmachine: (ha-274394-m02)     </interface>
	I0428 23:57:40.585193   36356 main.go:141] libmachine: (ha-274394-m02)     <serial type='pty'>
	I0428 23:57:40.585213   36356 main.go:141] libmachine: (ha-274394-m02)       <target port='0'/>
	I0428 23:57:40.585230   36356 main.go:141] libmachine: (ha-274394-m02)     </serial>
	I0428 23:57:40.585246   36356 main.go:141] libmachine: (ha-274394-m02)     <console type='pty'>
	I0428 23:57:40.585275   36356 main.go:141] libmachine: (ha-274394-m02)       <target type='serial' port='0'/>
	I0428 23:57:40.585293   36356 main.go:141] libmachine: (ha-274394-m02)     </console>
	I0428 23:57:40.585302   36356 main.go:141] libmachine: (ha-274394-m02)     <rng model='virtio'>
	I0428 23:57:40.585308   36356 main.go:141] libmachine: (ha-274394-m02)       <backend model='random'>/dev/random</backend>
	I0428 23:57:40.585314   36356 main.go:141] libmachine: (ha-274394-m02)     </rng>
	I0428 23:57:40.585321   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.585326   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.585333   36356 main.go:141] libmachine: (ha-274394-m02)   </devices>
	I0428 23:57:40.585338   36356 main.go:141] libmachine: (ha-274394-m02) </domain>
	I0428 23:57:40.585349   36356 main.go:141] libmachine: (ha-274394-m02) 
	I0428 23:57:40.591951   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:29:fa:a1 in network default
	I0428 23:57:40.592539   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring networks are active...
	I0428 23:57:40.592572   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:40.593208   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring network default is active
	I0428 23:57:40.593513   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring network mk-ha-274394 is active
	I0428 23:57:40.593859   36356 main.go:141] libmachine: (ha-274394-m02) Getting domain xml...
	I0428 23:57:40.594509   36356 main.go:141] libmachine: (ha-274394-m02) Creating domain...
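
The "define libvirt domain using xml" and "Creating domain..." steps amount to handing the XML printed above to libvirt and starting the resulting domain. A minimal sketch, assuming the libvirt-go bindings; this is an illustration, not the kvm2 driver's actual code.

// Define and start a KVM domain from an XML definition (assumed libvirt-go API).
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// URI taken from the machine config logged above (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	domainXML := "<domain type='kvm'>...</domain>" // the full XML logged above

	dom, err := conn.DomainDefineXML(domainXML) // persist the domain definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM
		log.Fatal(err)
	}
}
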
	I0428 23:57:41.836657   36356 main.go:141] libmachine: (ha-274394-m02) Waiting to get IP...
	I0428 23:57:41.837571   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:41.838045   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:41.838120   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:41.838015   36756 retry.go:31] will retry after 263.733241ms: waiting for machine to come up
	I0428 23:57:42.105185   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.105687   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.105724   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.105649   36756 retry.go:31] will retry after 331.1126ms: waiting for machine to come up
	I0428 23:57:42.438029   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.438463   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.438490   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.438436   36756 retry.go:31] will retry after 446.032628ms: waiting for machine to come up
	I0428 23:57:42.886123   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.886522   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.886551   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.886483   36756 retry.go:31] will retry after 461.928323ms: waiting for machine to come up
	I0428 23:57:43.350246   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:43.350746   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:43.350773   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:43.350696   36756 retry.go:31] will retry after 703.683282ms: waiting for machine to come up
	I0428 23:57:44.055920   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:44.056329   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:44.056361   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:44.056286   36756 retry.go:31] will retry after 903.640049ms: waiting for machine to come up
	I0428 23:57:44.961160   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:44.961635   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:44.961664   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:44.961581   36756 retry.go:31] will retry after 931.278913ms: waiting for machine to come up
	I0428 23:57:45.894066   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:45.894506   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:45.894535   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:45.894451   36756 retry.go:31] will retry after 1.279366183s: waiting for machine to come up
	I0428 23:57:47.174982   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:47.175538   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:47.175570   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:47.175475   36756 retry.go:31] will retry after 1.506197273s: waiting for machine to come up
	I0428 23:57:48.683913   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:48.684413   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:48.684452   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:48.684371   36756 retry.go:31] will retry after 2.323617854s: waiting for machine to come up
	I0428 23:57:51.009605   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:51.010052   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:51.010079   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:51.010011   36756 retry.go:31] will retry after 2.511993371s: waiting for machine to come up
	I0428 23:57:53.524618   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:53.524963   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:53.524989   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:53.524930   36756 retry.go:31] will retry after 2.984005541s: waiting for machine to come up
	I0428 23:57:56.510802   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:56.511159   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:56.511208   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:56.511109   36756 retry.go:31] will retry after 3.975363933s: waiting for machine to come up
	I0428 23:58:00.488249   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:00.488659   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:58:00.488699   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:58:00.488635   36756 retry.go:31] will retry after 4.708905436s: waiting for machine to come up
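
The repeated "will retry after ..." lines come from a jittered, growing backoff while the driver waits for the new VM to show up in the network's DHCP leases. A rough sketch of that pattern; lookupLeaseIP is a hypothetical stand-in for the driver's actual lease query.

// Wait for a VM's IP with jittered, increasing retry delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupLeaseIP(mac string) (string, error) {
	// Placeholder: the real driver inspects the libvirt network's DHCP leases.
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, then back off
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:94:ad:64", 2*time.Minute); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}
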
	I0428 23:58:05.199518   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.200038   36356 main.go:141] libmachine: (ha-274394-m02) Found IP for machine: 192.168.39.43
	I0428 23:58:05.200069   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.200075   36356 main.go:141] libmachine: (ha-274394-m02) Reserving static IP address...
	I0428 23:58:05.200401   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find host DHCP lease matching {name: "ha-274394-m02", mac: "52:54:00:94:ad:64", ip: "192.168.39.43"} in network mk-ha-274394
	I0428 23:58:05.271102   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Getting to WaitForSSH function...
	I0428 23:58:05.271136   36356 main.go:141] libmachine: (ha-274394-m02) Reserved static IP address: 192.168.39.43
	I0428 23:58:05.271154   36356 main.go:141] libmachine: (ha-274394-m02) Waiting for SSH to be available...
	I0428 23:58:05.273658   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.274071   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394
	I0428 23:58:05.274110   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find defined IP address of network mk-ha-274394 interface with MAC address 52:54:00:94:ad:64
	I0428 23:58:05.274190   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH client type: external
	I0428 23:58:05.274217   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa (-rw-------)
	I0428 23:58:05.274244   36356 main.go:141] libmachine: (ha-274394-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:58:05.274262   36356 main.go:141] libmachine: (ha-274394-m02) DBG | About to run SSH command:
	I0428 23:58:05.274279   36356 main.go:141] libmachine: (ha-274394-m02) DBG | exit 0
	I0428 23:58:05.277779   36356 main.go:141] libmachine: (ha-274394-m02) DBG | SSH cmd err, output: exit status 255: 
	I0428 23:58:05.277800   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0428 23:58:05.277808   36356 main.go:141] libmachine: (ha-274394-m02) DBG | command : exit 0
	I0428 23:58:05.277813   36356 main.go:141] libmachine: (ha-274394-m02) DBG | err     : exit status 255
	I0428 23:58:05.277834   36356 main.go:141] libmachine: (ha-274394-m02) DBG | output  : 
	I0428 23:58:08.279936   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Getting to WaitForSSH function...
	I0428 23:58:08.282287   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.282606   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.282638   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.282767   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH client type: external
	I0428 23:58:08.282790   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa (-rw-------)
	I0428 23:58:08.282817   36356 main.go:141] libmachine: (ha-274394-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:58:08.282831   36356 main.go:141] libmachine: (ha-274394-m02) DBG | About to run SSH command:
	I0428 23:58:08.282847   36356 main.go:141] libmachine: (ha-274394-m02) DBG | exit 0
	I0428 23:58:08.406493   36356 main.go:141] libmachine: (ha-274394-m02) DBG | SSH cmd err, output: <nil>: 
	I0428 23:58:08.406751   36356 main.go:141] libmachine: (ha-274394-m02) KVM machine creation complete!
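
The preceding lines show the kvm2 driver polling libvirt's DHCP leases with a growing backoff and then probing the guest with an external "ssh ... exit 0" until it answers (the first attempt fails with exit status 255 and is retried a few seconds later). Below is a minimal, stdlib-only Go sketch of that wait-and-probe pattern; the backoff schedule, host, key path and the waitFor/probe names are illustrative assumptions, not minikube's retry.go or WaitForSSH API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor retries probe with a growing delay until it succeeds or the
// deadline passes. The growth factor loosely mirrors the "will retry
// after ..." lines in the log; the exact schedule is an assumption.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out waiting: last error: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // roughly geometric growth
	}
}

func main() {
	// Probe SSH the same way the log does: run "exit 0" on the guest.
	// Address and key path are placeholders for this sketch.
	probe := func() error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/path/to/id_rsa",
			"docker@192.168.39.43", "exit 0")
		return cmd.Run()
	}
	if err := waitFor(probe, 2*time.Minute); err != nil {
		fmt.Println("SSH never became available:", err)
	}
}
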
	I0428 23:58:08.407023   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:58:08.407546   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:08.407752   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:08.407917   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:58:08.407949   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0428 23:58:08.409148   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:58:08.409163   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:58:08.409170   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:58:08.409176   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.411283   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.411639   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.411666   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.411790   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.411958   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.412074   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.412205   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.412341   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.412556   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.412567   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:58:08.517856   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:58:08.517876   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:58:08.517884   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.520659   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.521074   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.521104   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.521249   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.521452   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.521595   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.521718   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.521891   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.522108   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.522122   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:58:08.623755   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:58:08.623844   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:58:08.623859   36356 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:58:08.623869   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.624132   36356 buildroot.go:166] provisioning hostname "ha-274394-m02"
	I0428 23:58:08.624159   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.624360   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.626758   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.627168   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.627207   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.627320   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.627519   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.627679   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.627799   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.627986   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.628168   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.628185   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394-m02 && echo "ha-274394-m02" | sudo tee /etc/hostname
	I0428 23:58:08.748721   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394-m02
	
	I0428 23:58:08.748751   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.751328   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.751725   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.751754   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.751921   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.752118   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.752289   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.752435   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.752591   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.752746   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.752761   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:58:08.865693   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
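
The shell fragment above only touches /etc/hosts when the new hostname is missing, rewriting an existing 127.0.1.1 entry if there is one and appending otherwise. The following Go sketch shows the same decision logic applied to an in-memory copy of the file; the ensureHostname helper is hypothetical and exists only for illustration.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: make sure some line maps to the
// machine's hostname, rewriting an existing 127.0.1.1 entry if present,
// otherwise appending a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "ha-274394-m02"))
}
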
	I0428 23:58:08.865735   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:58:08.865752   36356 buildroot.go:174] setting up certificates
	I0428 23:58:08.865761   36356 provision.go:84] configureAuth start
	I0428 23:58:08.865770   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.866040   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:08.868779   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.869186   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.869215   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.869353   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.871473   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.871800   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.871832   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.871946   36356 provision.go:143] copyHostCerts
	I0428 23:58:08.871975   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:58:08.872008   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:58:08.872018   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:58:08.872094   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:58:08.872213   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:58:08.872239   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:58:08.872244   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:58:08.872278   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:58:08.872365   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:58:08.872389   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:58:08.872398   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:58:08.872430   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:58:08.872508   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394-m02 san=[127.0.0.1 192.168.39.43 ha-274394-m02 localhost minikube]
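
Here provision.go issues a server certificate for the new node, signed by the shared machine CA, with the SANs listed on the line above (loopback, the node's DHCP address, its hostname, localhost and minikube). Below is a stripped-down, stdlib-only Go sketch of that shape of issuance; it generates a throwaway CA in place of the real ca.pem/ca-key.pem, and the key size, validity, subject values and error handling (elided) are illustrative rather than minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the existing ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the same kind of SAN set the log reports.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-274394-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-274394-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
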
	I0428 23:58:09.052110   36356 provision.go:177] copyRemoteCerts
	I0428 23:58:09.052164   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:58:09.052184   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.054860   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.055216   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.055240   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.055399   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.055567   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.055717   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.055858   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.137022   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:58:09.137100   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:58:09.167033   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:58:09.167092   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 23:58:09.196003   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:58:09.196052   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 23:58:09.225865   36356 provision.go:87] duration metric: took 360.094398ms to configureAuth
	I0428 23:58:09.225900   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:58:09.226133   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:09.226208   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.228933   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.229315   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.229339   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.229568   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.229766   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.229900   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.230040   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.230170   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:09.230388   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:09.230411   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:58:09.507505   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:58:09.507533   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:58:09.507544   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetURL
	I0428 23:58:09.508981   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using libvirt version 6000000
	I0428 23:58:09.511365   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.511820   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.511847   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.511983   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:58:09.511995   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:58:09.512002   36356 client.go:171] duration metric: took 29.317754136s to LocalClient.Create
	I0428 23:58:09.512028   36356 start.go:167] duration metric: took 29.317822967s to libmachine.API.Create "ha-274394"
	I0428 23:58:09.512041   36356 start.go:293] postStartSetup for "ha-274394-m02" (driver="kvm2")
	I0428 23:58:09.512058   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:58:09.512081   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.512308   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:58:09.512333   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.514486   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.514786   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.514819   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.514890   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.515065   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.515222   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.515394   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.598106   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:58:09.603534   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:58:09.605400   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:58:09.605465   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:58:09.605532   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:58:09.605541   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:58:09.605627   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:58:09.616520   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:58:09.648817   36356 start.go:296] duration metric: took 136.751105ms for postStartSetup
	I0428 23:58:09.648864   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:58:09.649443   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:09.651782   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.652097   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.652145   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.652400   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:09.652581   36356 start.go:128] duration metric: took 29.476676023s to createHost
	I0428 23:58:09.652603   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.654816   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.655121   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.655141   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.655311   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.655499   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.655654   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.655785   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.655923   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:09.656090   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:09.656101   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:58:09.763749   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348689.739838626
	
	I0428 23:58:09.763772   36356 fix.go:216] guest clock: 1714348689.739838626
	I0428 23:58:09.763782   36356 fix.go:229] Guest: 2024-04-28 23:58:09.739838626 +0000 UTC Remote: 2024-04-28 23:58:09.652593063 +0000 UTC m=+85.098481504 (delta=87.245563ms)
	I0428 23:58:09.763801   36356 fix.go:200] guest clock delta is within tolerance: 87.245563ms
	I0428 23:58:09.763808   36356 start.go:83] releasing machines lock for "ha-274394-m02", held for 29.58803473s
	I0428 23:58:09.763831   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.764088   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:09.766409   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.766722   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.766751   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.769042   36356 out.go:177] * Found network options:
	I0428 23:58:09.770388   36356 out.go:177]   - NO_PROXY=192.168.39.237
	W0428 23:58:09.771614   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:58:09.771670   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772270   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772475   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772539   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:58:09.772591   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	W0428 23:58:09.772706   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:58:09.772781   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:58:09.772803   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.775069   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775442   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.775474   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775498   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775608   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.775788   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.775865   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.775888   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775969   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.776049   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.776112   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.776192   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.776354   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.776515   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:10.017919   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:58:10.025250   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:58:10.025319   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:58:10.042207   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:58:10.042227   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:58:10.042298   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:58:10.060095   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:58:10.074396   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:58:10.074438   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:58:10.089348   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:58:10.105801   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:58:10.231914   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:58:10.370359   36356 docker.go:233] disabling docker service ...
	I0428 23:58:10.370433   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:58:10.387029   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:58:10.401713   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:58:10.545671   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:58:10.673835   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:58:10.690495   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:58:10.713136   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:58:10.713195   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.724228   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:58:10.724289   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.734841   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.745343   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.755951   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:58:10.769464   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.780518   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.800846   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.813387   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:58:10.824342   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:58:10.824386   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:58:10.840504   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:58:10.850816   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:11.002082   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
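
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place over SSH: it pins the pause image, switches the cgroup driver to cgroupfs, forces conmon into the pod cgroup, opens unprivileged ports via default_sysctls, and then reloads systemd and restarts CRI-O. The Go sketch below applies the first three of those rewrites to an in-memory stand-in of the drop-in file so the resulting TOML is easy to see; the "before" contents are invented for illustration and the sketch is not minikube's crio.go.

package main

import (
	"fmt"
	"regexp"
)

// A trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf before provisioning.
const before = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9.0-placeholder"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := before
	// Mirror the sed edits from the log, in the same order:
	// pin the pause image, switch to cgroupfs, then replace conmon_cgroup.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
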
	I0428 23:58:11.150506   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:58:11.150580   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:58:11.155694   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:58:11.155737   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:58:11.159794   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:58:11.198604   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:58:11.198662   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:58:11.227554   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:58:11.259462   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:58:11.261048   36356 out.go:177]   - env NO_PROXY=192.168.39.237
	I0428 23:58:11.262197   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:11.264686   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:11.265028   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:11.265066   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:11.265314   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:58:11.269635   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:58:11.284122   36356 mustload.go:65] Loading cluster: ha-274394
	I0428 23:58:11.284320   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:11.284574   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:11.284606   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:11.299185   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0428 23:58:11.299552   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:11.300015   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:11.300035   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:11.300322   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:11.300512   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:58:11.302241   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:58:11.302540   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:11.302569   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:11.316673   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0428 23:58:11.317081   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:11.317581   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:11.317603   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:11.317957   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:11.318147   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:58:11.318306   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.43
	I0428 23:58:11.318321   36356 certs.go:194] generating shared ca certs ...
	I0428 23:58:11.318343   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.318474   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:58:11.318509   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:58:11.318518   36356 certs.go:256] generating profile certs ...
	I0428 23:58:11.318589   36356 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:58:11.318612   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c
	I0428 23:58:11.318627   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.254]
	I0428 23:58:11.545721   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c ...
	I0428 23:58:11.545748   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c: {Name:mkeed2aa96bd12faaef131331a07f70de364149a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.545910   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c ...
	I0428 23:58:11.545924   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c: {Name:mk7099ae4bf57427dc8efa8eca1c99f9dfbcfc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.545987   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:58:11.546128   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:58:11.546251   36356 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:58:11.546266   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:58:11.546283   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:58:11.546302   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:58:11.546314   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:58:11.546327   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:58:11.546339   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:58:11.546356   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:58:11.546367   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:58:11.546440   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:58:11.546474   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:58:11.546484   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:58:11.546515   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:58:11.546544   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:58:11.546575   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:58:11.546612   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:58:11.546640   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:58:11.546660   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:11.546673   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:58:11.546701   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:58:11.549269   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:11.549627   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:58:11.549651   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:11.549864   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:58:11.550078   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:58:11.550246   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:58:11.550386   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:58:11.626257   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0428 23:58:11.633493   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0428 23:58:11.648763   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0428 23:58:11.653913   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0428 23:58:11.666682   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0428 23:58:11.671203   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0428 23:58:11.683465   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0428 23:58:11.688416   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0428 23:58:11.700908   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0428 23:58:11.705691   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0428 23:58:11.718671   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0428 23:58:11.723458   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0428 23:58:11.736255   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:58:11.765294   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:58:11.790622   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:58:11.815237   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:58:11.840522   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0428 23:58:11.866184   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:58:11.892486   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:58:11.919387   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:58:11.945021   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:58:11.971444   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:58:11.998626   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:58:12.027890   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0428 23:58:12.047360   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0428 23:58:12.066886   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0428 23:58:12.085348   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0428 23:58:12.103655   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0428 23:58:12.122468   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0428 23:58:12.140788   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0428 23:58:12.159242   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:58:12.165224   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:58:12.177844   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.183316   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.183365   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.190660   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 23:58:12.204445   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:58:12.218304   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.223358   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.223409   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.229577   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:58:12.243231   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:58:12.256708   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.261981   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.262039   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.268393   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
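
Each test/ln/openssl sequence above installs a CA into the guest's trust store the same way update-ca-certificates would: compute the certificate's subject hash with "openssl x509 -hash" and symlink <hash>.0 in /etc/ssl/certs to the PEM file. A small Go sketch of that pattern, shelling out to openssl for the hash; the installCA helper and the paths in main are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a CA file and points
// /etc/ssl/certs/<hash>.0 at it so TLS clients on the guest trust it.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the -f behaviour of "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
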
	I0428 23:58:12.280239   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:58:12.284658   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:58:12.284714   36356 kubeadm.go:928] updating node {m02 192.168.39.43 8443 v1.30.0 crio true true} ...
	I0428 23:58:12.284819   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:58:12.284857   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:58:12.284892   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:58:12.305290   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:58:12.305341   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
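The manifest above is produced by filling the cluster's virtual IP (192.168.39.254), API server port and interface into a template; a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod. A rough, heavily abbreviated sketch of that kind of templating, with made-up field names rather than minikube's actual kube-vip.go:

package main

import (
	"os"
	"text/template"
)

// Values carries the per-cluster settings injected into the manifest.
type Values struct {
	VIP       string // control-plane virtual IP, e.g. 192.168.39.254
	Port      string // API server port, e.g. 8443
	Interface string // NIC the VIP is announced on, e.g. eth0
	Image     string // kube-vip image tag
}

// tmpl is an abbreviated static-pod manifest; the real one also sets
// leader-election and load-balancing environment variables.
const tmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: address, value: "{{.VIP}}"}
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: "{{.Interface}}"}
  hostNetwork: true
`

func main() {
	v := Values{VIP: "192.168.39.254", Port: "8443", Interface: "eth0",
		Image: "ghcr.io/kube-vip/kube-vip:v0.7.1"}
	t := template.Must(template.New("kube-vip").Parse(tmpl))
	// In the log, output like this is copied to /etc/kubernetes/manifests/kube-vip.yaml.
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}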
	I0428 23:58:12.305383   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:58:12.316898   36356 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0428 23:58:12.316947   36356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0428 23:58:12.329307   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0428 23:58:12.329324   36356 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0428 23:58:12.329342   36356 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0428 23:58:12.329329   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:58:12.329504   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:58:12.335405   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 23:58:12.335438   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0428 23:58:14.045229   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:58:14.045312   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:58:14.051404   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 23:58:14.051439   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0428 23:58:15.812706   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:58:15.829608   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:58:15.829706   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:58:15.834283   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 23:58:15.834318   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
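Each Kubernetes binary is fetched from dl.k8s.io with a ?checksum=file:<url>.sha256 suffix, cached under .minikube/cache, and only scp'd to the node when the stat existence check fails. A minimal sketch of a checksum-verified download using only the Go standard library; the fetch helper is illustrative, not minikube's download.go:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url to dst and compares its SHA-256 against the
// published <url>.sha256 file, mirroring the checksum=file: behaviour.
func fetch(url, dst string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is only trusted if the digests match.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", dst, got, want)
	}
	return nil
}

func main() {
	// URL taken from the log above.
	url := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet"
	if err := fetch(url, "kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}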
	I0428 23:58:16.307677   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0428 23:58:16.319127   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0428 23:58:16.341843   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:58:16.362469   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0428 23:58:16.382263   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:58:16.386704   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
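The grep/cp one-liner above keeps the control-plane.minikube.internal mapping idempotent: any stale line for that host is filtered out of /etc/hosts and the current VIP is appended before the file is written back. The same idea as a small Go sketch; the ensureHostsEntry helper is hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps
// host to ip, dropping any previous mapping for that host.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // drop the stale control-plane entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}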
	I0428 23:58:16.399847   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:16.542357   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:58:16.562649   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:58:16.563136   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:16.563183   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:16.578899   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0428 23:58:16.579324   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:16.579790   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:16.579815   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:16.580113   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:16.580286   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:58:16.580432   36356 start.go:316] joinCluster: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:58:16.580525   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0428 23:58:16.580547   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:58:16.583320   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:16.583742   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:58:16.583771   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:16.583929   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:58:16.584086   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:58:16.584266   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:58:16.584415   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:58:16.761399   36356 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:16.761453   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kta4yl.dkqb9qr4g4gf2lc7 --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I0428 23:58:38.387908   36356 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kta4yl.dkqb9qr4g4gf2lc7 --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (21.626432622s)
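Joining m02 as a second control-plane node is a plain kubeadm join against the VIP, using the bootstrap token and CA cert hash printed by `kubeadm token create --print-join-command` on the first node, plus --control-plane and the node's own advertise address. A sketch of how such a command line could be assembled; the joinArgs builder is hypothetical, while the flag values are copied from the log:

package main

import (
	"fmt"
	"strings"
)

// joinArgs builds the kubeadm join invocation for an additional
// control-plane node, matching the flags visible in the log above.
func joinArgs(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinArgs(
		"control-plane.minikube.internal:8443",
		"kta4yl.dkqb9qr4g4gf2lc7",
		"sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33",
		"ha-274394-m02",
		"192.168.39.43",
		8443,
	))
}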
	I0428 23:58:38.387953   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0428 23:58:39.010776   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394-m02 minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=false
	I0428 23:58:39.142412   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-274394-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0428 23:58:39.293444   36356 start.go:318] duration metric: took 22.713007972s to joinCluster
	I0428 23:58:39.293513   36356 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:39.295333   36356 out.go:177] * Verifying Kubernetes components...
	I0428 23:58:39.293856   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:39.296894   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:39.590067   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:58:39.653930   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:58:39.654223   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0428 23:58:39.654290   36356 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.237:8443
	I0428 23:58:39.654479   36356 node_ready.go:35] waiting up to 6m0s for node "ha-274394-m02" to be "Ready" ...
	I0428 23:58:39.654555   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:39.654563   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:39.654570   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:39.654574   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:39.664649   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 23:58:40.155296   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:40.155331   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:40.155342   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:40.155348   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:40.172701   36356 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0428 23:58:40.655311   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:40.655338   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:40.655350   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:40.655361   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:40.661218   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:41.155679   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:41.155700   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:41.155710   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:41.155713   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:41.159217   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:41.654979   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:41.655002   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:41.655011   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:41.655017   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:41.658216   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:41.659155   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:42.155563   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:42.155591   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:42.155602   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:42.155608   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:42.159309   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:42.655232   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:42.655250   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:42.655258   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:42.655262   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:42.658587   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:43.155324   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:43.155378   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:43.155392   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:43.155397   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:43.160299   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:43.655539   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:43.655559   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:43.655567   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:43.655570   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:43.659388   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:43.660385   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:44.154684   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:44.154707   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:44.154715   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:44.154720   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:44.158344   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:44.655229   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:44.684950   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:44.684969   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:44.684976   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:44.689026   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:45.155245   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:45.155266   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:45.155273   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:45.155278   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:45.158906   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:45.655245   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:45.655265   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:45.655272   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:45.655277   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:45.659245   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:46.155290   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:46.155311   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:46.155322   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:46.155328   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:46.160353   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:46.161167   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:46.654806   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:46.654828   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:46.654835   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:46.654839   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:46.658180   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:47.155464   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:47.155486   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:47.155494   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:47.155498   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:47.160773   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:47.654818   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:47.654843   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:47.654850   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:47.654855   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:47.658410   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.154672   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.154697   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.154706   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.154710   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.159615   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:48.160955   36356 node_ready.go:49] node "ha-274394-m02" has status "Ready":"True"
	I0428 23:58:48.160973   36356 node_ready.go:38] duration metric: took 8.506473788s for node "ha-274394-m02" to be "Ready" ...
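node_ready.go simply re-issues GET /api/v1/nodes/<name> roughly every 500ms until the Ready condition turns True; the pod_ready.go loop below repeats the same pattern for each system-critical pod. A compact client-go sketch of that wait, assuming the kubeconfig path from this run; this is illustrative, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout expires, mirroring the 500ms GET loop in the log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	// Kubeconfig path as used by this test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17977-13393/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-274394-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}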
	I0428 23:58:48.160982   36356 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 23:58:48.161046   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:48.161055   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.161062   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.161068   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.169896   36356 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 23:58:48.176675   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.176747   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rslhx
	I0428 23:58:48.176759   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.176765   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.176768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.179899   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.180719   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.180735   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.180742   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.180747   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.184065   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.184638   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.184657   36356 pod_ready.go:81] duration metric: took 7.958913ms for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.184666   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.184714   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xkdcv
	I0428 23:58:48.184722   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.184730   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.184736   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.195110   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 23:58:48.195972   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.195992   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.195999   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.196003   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.199213   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.199713   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.199733   36356 pod_ready.go:81] duration metric: took 15.060231ms for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.199747   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.199805   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394
	I0428 23:58:48.199815   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.199821   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.199825   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.203469   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.204875   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.204891   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.204898   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.204902   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.208879   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.209993   36356 pod_ready.go:92] pod "etcd-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.210011   36356 pod_ready.go:81] duration metric: took 10.253451ms for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.210037   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.210104   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:48.210112   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.210118   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.210123   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.212475   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:48.213184   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.213196   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.213203   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.213206   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.216781   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.710847   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:48.710869   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.710877   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.710881   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.717464   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 23:58:48.718367   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.718385   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.718395   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.718402   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.721385   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:49.210458   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:49.210481   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.210488   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.210492   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.214589   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:49.215523   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:49.215538   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.215545   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.215549   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.218124   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:49.710303   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:49.710327   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.710334   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.710340   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.714259   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:49.715063   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:49.715080   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.715088   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.715092   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.718099   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:50.211244   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:50.211267   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.211277   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.211285   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.215809   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:50.216675   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:50.216695   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.216705   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.216711   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.220299   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:50.220861   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:50.710227   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:50.710254   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.710266   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.710273   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.714777   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:50.715631   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:50.715647   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.715656   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.715661   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.718977   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.210455   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:51.210484   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.210492   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.210495   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.214863   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:51.215676   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:51.215695   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.215705   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.215711   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.219164   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.710844   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:51.710866   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.710874   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.710878   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.714505   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.715380   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:51.715395   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.715402   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.715405   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.718737   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.210936   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:52.210960   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.210969   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.210973   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.214315   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.215246   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:52.215264   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.215271   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.215276   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.217789   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:52.710926   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:52.710951   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.710960   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.710972   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.714784   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.715778   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:52.715797   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.715805   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.715810   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.718927   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.719705   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:53.211232   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:53.211273   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.211284   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.211289   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.215284   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:53.215955   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:53.215970   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.215976   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.215980   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.218518   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:53.710221   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:53.710244   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.710254   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.710259   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.713769   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:53.714799   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:53.714816   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.714826   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.714831   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.717753   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:54.210997   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:54.211027   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.211035   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.211039   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.214689   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.215567   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:54.215581   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.215587   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.215592   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.219396   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.710980   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:54.710999   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.711006   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.711010   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.714596   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.715401   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:54.715416   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.715421   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.715425   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.718723   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:55.211159   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:55.211186   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.211199   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.211207   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.215703   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:55.216484   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:55.216501   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.216507   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.216511   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.219394   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:55.220002   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:55.710286   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:55.710320   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.710330   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.710335   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.714152   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:55.715422   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:55.715435   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.715442   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.715446   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.718421   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.210425   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:56.210447   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.210455   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.210459   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.214208   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:56.215146   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:56.215160   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.215167   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.215171   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.217756   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.710994   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:56.711013   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.711021   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.711024   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.713949   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.714853   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:56.714867   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.714872   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.714876   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.717415   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.210188   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:57.210211   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.210219   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.210223   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.213897   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.215006   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.215024   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.215033   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.215039   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.217552   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.218220   36356 pod_ready.go:92] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.218236   36356 pod_ready.go:81] duration metric: took 9.008187231s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.218250   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.218295   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394
	I0428 23:58:57.218302   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.218308   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.218315   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.220629   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.221425   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:57.221443   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.221453   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.221462   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.223509   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.224113   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.224133   36356 pod_ready.go:81] duration metric: took 5.873511ms for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.224144   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.224215   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m02
	I0428 23:58:57.224227   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.224236   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.224244   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.226285   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.227060   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.227075   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.227082   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.227087   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.229206   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.229767   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.229788   36356 pod_ready.go:81] duration metric: took 5.632505ms for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.229799   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.229849   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0428 23:58:57.229858   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.229864   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.229868   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.232892   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.233655   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:57.233670   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.233676   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.233682   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.235860   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.236478   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.236500   36356 pod_ready.go:81] duration metric: took 6.69293ms for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.236513   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.236567   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0428 23:58:57.236582   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.236591   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.236603   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.239009   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.239661   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.239676   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.239684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.239691   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.242103   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.242658   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.242674   36356 pod_ready.go:81] duration metric: took 6.151599ms for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.242681   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.411103   36356 request.go:629] Waited for 168.362894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0428 23:58:57.411156   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0428 23:58:57.411161   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.411169   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.411174   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.414521   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.610626   36356 request.go:629] Waited for 195.367099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.610752   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.610822   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.610833   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.610846   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.615056   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:57.615713   36356 pod_ready.go:92] pod "kube-proxy-g95c9" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.615731   36356 pod_ready.go:81] duration metric: took 373.044367ms for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.615740   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.811010   36356 request.go:629] Waited for 195.183352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0428 23:58:57.811064   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0428 23:58:57.811068   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.811076   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.811081   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.815095   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.010219   36356 request.go:629] Waited for 194.281833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.010339   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.010358   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.010365   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.010370   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.014383   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.015289   36356 pod_ready.go:92] pod "kube-proxy-pwbfs" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.015307   36356 pod_ready.go:81] duration metric: took 399.560892ms for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.015315   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.210510   36356 request.go:629] Waited for 195.105309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0428 23:58:58.210572   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0428 23:58:58.210577   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.210583   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.210588   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.215302   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.410679   36356 request.go:629] Waited for 194.371002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.410749   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.410755   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.410764   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.410770   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.414880   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.415576   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.415594   36356 pod_ready.go:81] duration metric: took 400.27299ms for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.415604   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.610680   36356 request.go:629] Waited for 195.022143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0428 23:58:58.610745   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0428 23:58:58.610751   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.610756   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.610760   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.615040   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.811088   36356 request.go:629] Waited for 195.345352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:58.811150   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:58.811156   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.811167   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.811171   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.814458   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.815300   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.815317   36356 pod_ready.go:81] duration metric: took 399.706734ms for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.815328   36356 pod_ready.go:38] duration metric: took 10.654327215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 23:58:58.815340   36356 api_server.go:52] waiting for apiserver process to appear ...
	I0428 23:58:58.815386   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 23:58:58.831473   36356 api_server.go:72] duration metric: took 19.537927218s to wait for apiserver process to appear ...
	I0428 23:58:58.831505   36356 api_server.go:88] waiting for apiserver healthz status ...
	I0428 23:58:58.831530   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0428 23:58:58.836498   36356 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0428 23:58:58.836579   36356 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0428 23:58:58.836595   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.836613   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.836621   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.837583   36356 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0428 23:58:58.837822   36356 api_server.go:141] control plane version: v1.30.0
	I0428 23:58:58.837849   36356 api_server.go:131] duration metric: took 6.335764ms to wait for apiserver health ...
	I0428 23:58:58.837859   36356 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 23:58:59.010251   36356 request.go:629] Waited for 172.319916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.010324   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.010354   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.010369   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.010377   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.016252   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:59.023090   36356 system_pods.go:59] 17 kube-system pods found
	I0428 23:58:59.023126   36356 system_pods.go:61] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0428 23:58:59.023132   36356 system_pods.go:61] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0428 23:58:59.023136   36356 system_pods.go:61] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0428 23:58:59.023140   36356 system_pods.go:61] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0428 23:58:59.023143   36356 system_pods.go:61] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0428 23:58:59.023146   36356 system_pods.go:61] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0428 23:58:59.023150   36356 system_pods.go:61] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0428 23:58:59.023155   36356 system_pods.go:61] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0428 23:58:59.023158   36356 system_pods.go:61] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0428 23:58:59.023161   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0428 23:58:59.023167   36356 system_pods.go:61] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0428 23:58:59.023172   36356 system_pods.go:61] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0428 23:58:59.023175   36356 system_pods.go:61] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0428 23:58:59.023180   36356 system_pods.go:61] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0428 23:58:59.023183   36356 system_pods.go:61] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0428 23:58:59.023186   36356 system_pods.go:61] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0428 23:58:59.023189   36356 system_pods.go:61] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0428 23:58:59.023194   36356 system_pods.go:74] duration metric: took 185.326461ms to wait for pod list to return data ...
	I0428 23:58:59.023207   36356 default_sa.go:34] waiting for default service account to be created ...
	I0428 23:58:59.210913   36356 request.go:629] Waited for 187.648663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0428 23:58:59.210979   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0428 23:58:59.210993   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.211002   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.211013   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.214865   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:59.215086   36356 default_sa.go:45] found service account: "default"
	I0428 23:58:59.215102   36356 default_sa.go:55] duration metric: took 191.890036ms for default service account to be created ...
	I0428 23:58:59.215110   36356 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 23:58:59.410522   36356 request.go:629] Waited for 195.32449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.410587   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.410592   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.410599   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.410603   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.416485   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:59.422169   36356 system_pods.go:86] 17 kube-system pods found
	I0428 23:58:59.422199   36356 system_pods.go:89] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0428 23:58:59.422207   36356 system_pods.go:89] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0428 23:58:59.422214   36356 system_pods.go:89] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0428 23:58:59.422220   36356 system_pods.go:89] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0428 23:58:59.422226   36356 system_pods.go:89] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0428 23:58:59.422232   36356 system_pods.go:89] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0428 23:58:59.422237   36356 system_pods.go:89] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0428 23:58:59.422243   36356 system_pods.go:89] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0428 23:58:59.422251   36356 system_pods.go:89] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0428 23:58:59.422265   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0428 23:58:59.422275   36356 system_pods.go:89] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0428 23:58:59.422283   36356 system_pods.go:89] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0428 23:58:59.422293   36356 system_pods.go:89] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0428 23:58:59.422300   36356 system_pods.go:89] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0428 23:58:59.422310   36356 system_pods.go:89] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0428 23:58:59.422316   36356 system_pods.go:89] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0428 23:58:59.422325   36356 system_pods.go:89] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0428 23:58:59.422337   36356 system_pods.go:126] duration metric: took 207.21932ms to wait for k8s-apps to be running ...
	I0428 23:58:59.422349   36356 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 23:58:59.422404   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:58:59.441950   36356 system_svc.go:56] duration metric: took 19.591591ms WaitForService to wait for kubelet
	I0428 23:58:59.441982   36356 kubeadm.go:576] duration metric: took 20.148438728s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 23:58:59.442004   36356 node_conditions.go:102] verifying NodePressure condition ...
	I0428 23:58:59.610455   36356 request.go:629] Waited for 168.364577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0428 23:58:59.610505   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0428 23:58:59.610515   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.610522   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.610526   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.614523   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:59.615695   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 23:58:59.615718   36356 node_conditions.go:123] node cpu capacity is 2
	I0428 23:58:59.615731   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 23:58:59.615735   36356 node_conditions.go:123] node cpu capacity is 2
	I0428 23:58:59.615741   36356 node_conditions.go:105] duration metric: took 173.731434ms to run NodePressure ...
	I0428 23:58:59.615756   36356 start.go:240] waiting for startup goroutines ...
	I0428 23:58:59.615797   36356 start.go:254] writing updated cluster config ...
	I0428 23:58:59.617862   36356 out.go:177] 
	I0428 23:58:59.619360   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:59.619475   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:59.621275   36356 out.go:177] * Starting "ha-274394-m03" control-plane node in "ha-274394" cluster
	I0428 23:58:59.622428   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:58:59.622455   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:58:59.622553   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:58:59.622565   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:58:59.622681   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:59.622874   36356 start.go:360] acquireMachinesLock for ha-274394-m03: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:58:59.622929   36356 start.go:364] duration metric: took 33.665µs to acquireMachinesLock for "ha-274394-m03"
	I0428 23:58:59.622950   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:59.623064   36356 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0428 23:58:59.624667   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:58:59.624758   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:59.624802   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:59.641214   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I0428 23:58:59.641727   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:59.642309   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:59.642334   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:59.642611   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:59.642804   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:58:59.642927   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:58:59.643066   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:58:59.643091   36356 client.go:168] LocalClient.Create starting
	I0428 23:58:59.643121   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:58:59.643154   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:58:59.643179   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:58:59.643227   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:58:59.643249   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:58:59.643260   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:58:59.643281   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:58:59.643296   36356 main.go:141] libmachine: (ha-274394-m03) Calling .PreCreateCheck
	I0428 23:58:59.643479   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:58:59.643879   36356 main.go:141] libmachine: Creating machine...
	I0428 23:58:59.643892   36356 main.go:141] libmachine: (ha-274394-m03) Calling .Create
	I0428 23:58:59.644001   36356 main.go:141] libmachine: (ha-274394-m03) Creating KVM machine...
	I0428 23:58:59.645183   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found existing default KVM network
	I0428 23:58:59.645266   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found existing private KVM network mk-ha-274394
	I0428 23:58:59.645383   36356 main.go:141] libmachine: (ha-274394-m03) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 ...
	I0428 23:58:59.645406   36356 main.go:141] libmachine: (ha-274394-m03) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:58:59.645459   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.645378   37169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:58:59.645569   36356 main.go:141] libmachine: (ha-274394-m03) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:58:59.868035   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.867896   37169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa...
	I0428 23:58:59.956656   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.956555   37169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/ha-274394-m03.rawdisk...
	I0428 23:58:59.956683   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Writing magic tar header
	I0428 23:58:59.956697   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Writing SSH key tar header
	I0428 23:58:59.956708   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.956666   37169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 ...
	I0428 23:58:59.956777   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03
	I0428 23:58:59.956822   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:58:59.956840   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 (perms=drwx------)
	I0428 23:58:59.956859   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:58:59.956873   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:58:59.956887   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:58:59.956902   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:58:59.956914   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:58:59.956933   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:58:59.956960   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:58:59.956971   36356 main.go:141] libmachine: (ha-274394-m03) Creating domain...
	I0428 23:58:59.956990   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:58:59.957007   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:58:59.957021   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home
	I0428 23:58:59.957038   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Skipping /home - not owner
	I0428 23:58:59.957806   36356 main.go:141] libmachine: (ha-274394-m03) define libvirt domain using xml: 
	I0428 23:58:59.957828   36356 main.go:141] libmachine: (ha-274394-m03) <domain type='kvm'>
	I0428 23:58:59.957838   36356 main.go:141] libmachine: (ha-274394-m03)   <name>ha-274394-m03</name>
	I0428 23:58:59.957853   36356 main.go:141] libmachine: (ha-274394-m03)   <memory unit='MiB'>2200</memory>
	I0428 23:58:59.957866   36356 main.go:141] libmachine: (ha-274394-m03)   <vcpu>2</vcpu>
	I0428 23:58:59.957877   36356 main.go:141] libmachine: (ha-274394-m03)   <features>
	I0428 23:58:59.957887   36356 main.go:141] libmachine: (ha-274394-m03)     <acpi/>
	I0428 23:58:59.957898   36356 main.go:141] libmachine: (ha-274394-m03)     <apic/>
	I0428 23:58:59.957909   36356 main.go:141] libmachine: (ha-274394-m03)     <pae/>
	I0428 23:58:59.957920   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.957929   36356 main.go:141] libmachine: (ha-274394-m03)   </features>
	I0428 23:58:59.957941   36356 main.go:141] libmachine: (ha-274394-m03)   <cpu mode='host-passthrough'>
	I0428 23:58:59.957968   36356 main.go:141] libmachine: (ha-274394-m03)   
	I0428 23:58:59.957989   36356 main.go:141] libmachine: (ha-274394-m03)   </cpu>
	I0428 23:58:59.958001   36356 main.go:141] libmachine: (ha-274394-m03)   <os>
	I0428 23:58:59.958017   36356 main.go:141] libmachine: (ha-274394-m03)     <type>hvm</type>
	I0428 23:58:59.958046   36356 main.go:141] libmachine: (ha-274394-m03)     <boot dev='cdrom'/>
	I0428 23:58:59.958059   36356 main.go:141] libmachine: (ha-274394-m03)     <boot dev='hd'/>
	I0428 23:58:59.958069   36356 main.go:141] libmachine: (ha-274394-m03)     <bootmenu enable='no'/>
	I0428 23:58:59.958080   36356 main.go:141] libmachine: (ha-274394-m03)   </os>
	I0428 23:58:59.958092   36356 main.go:141] libmachine: (ha-274394-m03)   <devices>
	I0428 23:58:59.958105   36356 main.go:141] libmachine: (ha-274394-m03)     <disk type='file' device='cdrom'>
	I0428 23:58:59.958119   36356 main.go:141] libmachine: (ha-274394-m03)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/boot2docker.iso'/>
	I0428 23:58:59.958132   36356 main.go:141] libmachine: (ha-274394-m03)       <target dev='hdc' bus='scsi'/>
	I0428 23:58:59.958144   36356 main.go:141] libmachine: (ha-274394-m03)       <readonly/>
	I0428 23:58:59.958155   36356 main.go:141] libmachine: (ha-274394-m03)     </disk>
	I0428 23:58:59.958169   36356 main.go:141] libmachine: (ha-274394-m03)     <disk type='file' device='disk'>
	I0428 23:58:59.958187   36356 main.go:141] libmachine: (ha-274394-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:58:59.958206   36356 main.go:141] libmachine: (ha-274394-m03)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/ha-274394-m03.rawdisk'/>
	I0428 23:58:59.958218   36356 main.go:141] libmachine: (ha-274394-m03)       <target dev='hda' bus='virtio'/>
	I0428 23:58:59.958230   36356 main.go:141] libmachine: (ha-274394-m03)     </disk>
	I0428 23:58:59.958242   36356 main.go:141] libmachine: (ha-274394-m03)     <interface type='network'>
	I0428 23:58:59.958274   36356 main.go:141] libmachine: (ha-274394-m03)       <source network='mk-ha-274394'/>
	I0428 23:58:59.958300   36356 main.go:141] libmachine: (ha-274394-m03)       <model type='virtio'/>
	I0428 23:58:59.958313   36356 main.go:141] libmachine: (ha-274394-m03)     </interface>
	I0428 23:58:59.958329   36356 main.go:141] libmachine: (ha-274394-m03)     <interface type='network'>
	I0428 23:58:59.958342   36356 main.go:141] libmachine: (ha-274394-m03)       <source network='default'/>
	I0428 23:58:59.958350   36356 main.go:141] libmachine: (ha-274394-m03)       <model type='virtio'/>
	I0428 23:58:59.958363   36356 main.go:141] libmachine: (ha-274394-m03)     </interface>
	I0428 23:58:59.958371   36356 main.go:141] libmachine: (ha-274394-m03)     <serial type='pty'>
	I0428 23:58:59.958382   36356 main.go:141] libmachine: (ha-274394-m03)       <target port='0'/>
	I0428 23:58:59.958390   36356 main.go:141] libmachine: (ha-274394-m03)     </serial>
	I0428 23:58:59.958401   36356 main.go:141] libmachine: (ha-274394-m03)     <console type='pty'>
	I0428 23:58:59.958417   36356 main.go:141] libmachine: (ha-274394-m03)       <target type='serial' port='0'/>
	I0428 23:58:59.958433   36356 main.go:141] libmachine: (ha-274394-m03)     </console>
	I0428 23:58:59.958450   36356 main.go:141] libmachine: (ha-274394-m03)     <rng model='virtio'>
	I0428 23:58:59.958464   36356 main.go:141] libmachine: (ha-274394-m03)       <backend model='random'>/dev/random</backend>
	I0428 23:58:59.958474   36356 main.go:141] libmachine: (ha-274394-m03)     </rng>
	I0428 23:58:59.958483   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.958497   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.958508   36356 main.go:141] libmachine: (ha-274394-m03)   </devices>
	I0428 23:58:59.958517   36356 main.go:141] libmachine: (ha-274394-m03) </domain>
	I0428 23:58:59.958532   36356 main.go:141] libmachine: (ha-274394-m03) 
	I0428 23:58:59.965013   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:ba:70:2d in network default
	I0428 23:58:59.965465   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring networks are active...
	I0428 23:58:59.965490   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:58:59.966174   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring network default is active
	I0428 23:58:59.966465   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring network mk-ha-274394 is active
	I0428 23:58:59.966765   36356 main.go:141] libmachine: (ha-274394-m03) Getting domain xml...
	I0428 23:58:59.967422   36356 main.go:141] libmachine: (ha-274394-m03) Creating domain...
	I0428 23:59:01.202748   36356 main.go:141] libmachine: (ha-274394-m03) Waiting to get IP...
	I0428 23:59:01.203443   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.203897   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.203938   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.203872   37169 retry.go:31] will retry after 282.787142ms: waiting for machine to come up
	I0428 23:59:01.488289   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.488845   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.488880   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.488821   37169 retry.go:31] will retry after 311.074996ms: waiting for machine to come up
	I0428 23:59:01.801101   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.801590   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.801615   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.801538   37169 retry.go:31] will retry after 333.347197ms: waiting for machine to come up
	I0428 23:59:02.136222   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:02.136685   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:02.136722   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:02.136662   37169 retry.go:31] will retry after 515.127499ms: waiting for machine to come up
	I0428 23:59:02.652873   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:02.653262   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:02.653290   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:02.653217   37169 retry.go:31] will retry after 472.600429ms: waiting for machine to come up
	I0428 23:59:03.127829   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:03.128260   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:03.128285   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:03.128216   37169 retry.go:31] will retry after 918.328461ms: waiting for machine to come up
	I0428 23:59:04.047989   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:04.048469   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:04.048501   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:04.048401   37169 retry.go:31] will retry after 1.054046887s: waiting for machine to come up
	I0428 23:59:05.104188   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:05.104616   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:05.104654   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:05.104563   37169 retry.go:31] will retry after 1.317728284s: waiting for machine to come up
	I0428 23:59:06.424099   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:06.424567   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:06.424603   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:06.424502   37169 retry.go:31] will retry after 1.54429179s: waiting for machine to come up
	I0428 23:59:07.971097   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:07.971619   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:07.971640   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:07.971572   37169 retry.go:31] will retry after 1.943348331s: waiting for machine to come up
	I0428 23:59:09.916650   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:09.917110   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:09.917138   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:09.917059   37169 retry.go:31] will retry after 2.643143471s: waiting for machine to come up
	I0428 23:59:12.563295   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:12.563756   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:12.563783   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:12.563719   37169 retry.go:31] will retry after 3.420586328s: waiting for machine to come up
	I0428 23:59:15.986099   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:15.986542   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:15.986573   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:15.986487   37169 retry.go:31] will retry after 3.581143816s: waiting for machine to come up
	I0428 23:59:19.571466   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:19.571889   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:19.571918   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:19.571850   37169 retry.go:31] will retry after 5.55088001s: waiting for machine to come up
	I0428 23:59:25.124118   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.124562   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.124584   36356 main.go:141] libmachine: (ha-274394-m03) Found IP for machine: 192.168.39.250
	I0428 23:59:25.124598   36356 main.go:141] libmachine: (ha-274394-m03) Reserving static IP address...
	I0428 23:59:25.124921   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find host DHCP lease matching {name: "ha-274394-m03", mac: "52:54:00:0d:4c:dd", ip: "192.168.39.250"} in network mk-ha-274394
	I0428 23:59:25.197142   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Getting to WaitForSSH function...
	I0428 23:59:25.197166   36356 main.go:141] libmachine: (ha-274394-m03) Reserved static IP address: 192.168.39.250
	I0428 23:59:25.197212   36356 main.go:141] libmachine: (ha-274394-m03) Waiting for SSH to be available...
	I0428 23:59:25.199898   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.200254   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394
	I0428 23:59:25.200280   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find defined IP address of network mk-ha-274394 interface with MAC address 52:54:00:0d:4c:dd
	I0428 23:59:25.200392   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH client type: external
	I0428 23:59:25.200415   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa (-rw-------)
	I0428 23:59:25.200454   36356 main.go:141] libmachine: (ha-274394-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:59:25.200480   36356 main.go:141] libmachine: (ha-274394-m03) DBG | About to run SSH command:
	I0428 23:59:25.200505   36356 main.go:141] libmachine: (ha-274394-m03) DBG | exit 0
	I0428 23:59:25.204174   36356 main.go:141] libmachine: (ha-274394-m03) DBG | SSH cmd err, output: exit status 255: 
	I0428 23:59:25.204192   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0428 23:59:25.204202   36356 main.go:141] libmachine: (ha-274394-m03) DBG | command : exit 0
	I0428 23:59:25.204210   36356 main.go:141] libmachine: (ha-274394-m03) DBG | err     : exit status 255
	I0428 23:59:25.204221   36356 main.go:141] libmachine: (ha-274394-m03) DBG | output  : 
	I0428 23:59:28.206195   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Getting to WaitForSSH function...
	I0428 23:59:28.209965   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.210449   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.210480   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.210638   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH client type: external
	I0428 23:59:28.210667   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa (-rw-------)
	I0428 23:59:28.210707   36356 main.go:141] libmachine: (ha-274394-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:59:28.210727   36356 main.go:141] libmachine: (ha-274394-m03) DBG | About to run SSH command:
	I0428 23:59:28.210742   36356 main.go:141] libmachine: (ha-274394-m03) DBG | exit 0
	I0428 23:59:28.338185   36356 main.go:141] libmachine: (ha-274394-m03) DBG | SSH cmd err, output: <nil>: 
	I0428 23:59:28.338430   36356 main.go:141] libmachine: (ha-274394-m03) KVM machine creation complete!
	I0428 23:59:28.338791   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:59:28.339377   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:28.339584   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:28.339791   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:59:28.339811   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0428 23:59:28.341407   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:59:28.341426   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:59:28.341433   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:59:28.341441   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.343848   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.344223   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.344248   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.344376   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.344530   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.344668   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.344809   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.344963   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.345166   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.345177   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:59:28.457369   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:59:28.457393   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:59:28.457401   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.459831   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.460234   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.460254   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.460462   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.460635   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.460795   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.460929   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.461110   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.461319   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.461334   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:59:28.575513   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:59:28.575577   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:59:28.575591   36356 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:59:28.575599   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.575836   36356 buildroot.go:166] provisioning hostname "ha-274394-m03"
	I0428 23:59:28.575863   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.576068   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.578532   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.578931   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.578960   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.579060   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.579211   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.579335   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.579444   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.579637   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.579820   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.579837   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394-m03 && echo "ha-274394-m03" | sudo tee /etc/hostname
	I0428 23:59:28.712688   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394-m03
	
	I0428 23:59:28.712717   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.715733   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.716152   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.716191   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.716417   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.716624   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.716814   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.716966   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.717155   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.717357   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.717380   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:59:28.841508   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:59:28.841559   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:59:28.841576   36356 buildroot.go:174] setting up certificates
	I0428 23:59:28.841586   36356 provision.go:84] configureAuth start
	I0428 23:59:28.841595   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.841879   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:28.845193   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.845548   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.845578   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.845693   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.847976   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.848368   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.848393   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.848514   36356 provision.go:143] copyHostCerts
	I0428 23:59:28.848537   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:59:28.848565   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:59:28.848573   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:59:28.848635   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:59:28.848714   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:59:28.848732   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:59:28.848739   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:59:28.848762   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:59:28.848811   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:59:28.848827   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:59:28.848833   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:59:28.848853   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:59:28.848903   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394-m03 san=[127.0.0.1 192.168.39.250 ha-274394-m03 localhost minikube]
	I0428 23:59:29.012952   36356 provision.go:177] copyRemoteCerts
	I0428 23:59:29.013023   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:59:29.013055   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.015566   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.015904   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.015935   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.016127   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.016358   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.016550   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.016710   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.109376   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:59:29.109447   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:59:29.140078   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:59:29.140132   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 23:59:29.170421   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:59:29.170498   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 23:59:29.196503   36356 provision.go:87] duration metric: took 354.905712ms to configureAuth
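
configureAuth copies the host CA material into the .minikube directory, generates a per-node server certificate with the SAN list shown above (127.0.0.1, the node IP, the node name, localhost, minikube), and scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A standalone Go sketch for checking which SANs actually ended up in such a server.pem (the file name is a placeholder, not a path from this run):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Placeholder path; point it at any server.pem produced by a run like the one above.
    	data, err := os.ReadFile("server.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The run above expects SANs covering 127.0.0.1, the node IP, the node name, localhost and minikube.
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    }
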
	I0428 23:59:29.196530   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:59:29.196783   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:29.196853   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.199543   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.199885   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.199907   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.200083   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.200254   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.200404   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.200525   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.200690   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:29.200838   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:29.200853   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:59:29.503246   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:59:29.503276   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:59:29.503287   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetURL
	I0428 23:59:29.504495   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using libvirt version 6000000
	I0428 23:59:29.506850   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.507214   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.507241   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.507419   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:59:29.507439   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:59:29.507446   36356 client.go:171] duration metric: took 29.864346558s to LocalClient.Create
	I0428 23:59:29.507469   36356 start.go:167] duration metric: took 29.864403952s to libmachine.API.Create "ha-274394"
	I0428 23:59:29.507478   36356 start.go:293] postStartSetup for "ha-274394-m03" (driver="kvm2")
	I0428 23:59:29.507488   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:59:29.507509   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.507729   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:59:29.507746   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.510131   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.510522   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.510563   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.510678   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.510845   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.511001   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.511156   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.596901   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:59:29.601706   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:59:29.601727   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:59:29.601789   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:59:29.601886   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:59:29.601896   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:59:29.602001   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:59:29.612235   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:59:29.639858   36356 start.go:296] duration metric: took 132.371288ms for postStartSetup
	I0428 23:59:29.639898   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:59:29.640442   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:29.643445   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.643832   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.643857   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.644181   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:59:29.644406   36356 start.go:128] duration metric: took 30.021329967s to createHost
	I0428 23:59:29.644432   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.646565   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.646909   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.646939   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.647052   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.647200   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.647366   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.647477   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.647640   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:29.647806   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:29.647818   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:59:29.767512   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348769.755757207
	
	I0428 23:59:29.767540   36356 fix.go:216] guest clock: 1714348769.755757207
	I0428 23:59:29.767552   36356 fix.go:229] Guest: 2024-04-28 23:59:29.755757207 +0000 UTC Remote: 2024-04-28 23:59:29.644418148 +0000 UTC m=+165.090306589 (delta=111.339059ms)
	I0428 23:59:29.767569   36356 fix.go:200] guest clock delta is within tolerance: 111.339059ms
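
The clock check above runs what is effectively `date +%s.%N` on the guest and compares it with the host time, accepting the ~111ms delta. A small sketch of the same comparison; the 2s tolerance is an assumption for illustration, and the epoch value is simply the one printed in this log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch turns "seconds.nanoseconds" (the guest's `date +%s.%N` output) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseEpoch("1714348769.755757207") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	// This run treated ~111ms as acceptable; 2s is an assumed threshold for the sketch.
    	fmt.Printf("guest clock delta: %v (within 2s: %v)\n", delta, delta < 2*time.Second && delta > -2*time.Second)
    }
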
	I0428 23:59:29.767575   36356 start.go:83] releasing machines lock for "ha-274394-m03", held for 30.144638005s
	I0428 23:59:29.767599   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.767844   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:29.770233   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.770627   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.770658   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.772993   36356 out.go:177] * Found network options:
	I0428 23:59:29.774437   36356 out.go:177]   - NO_PROXY=192.168.39.237,192.168.39.43
	W0428 23:59:29.775869   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 23:59:29.775892   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:59:29.775908   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776440   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776628   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776720   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:59:29.776749   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	W0428 23:59:29.776986   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 23:59:29.777012   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:59:29.777072   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:59:29.777091   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.779588   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.779789   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780023   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.780062   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780288   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.780325   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.780341   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780487   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.780497   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.780688   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.780693   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.780882   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.780886   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.781047   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:30.022766   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:59:30.029806   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:59:30.029872   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:59:30.049513   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:59:30.049537   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:59:30.049602   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:59:30.067833   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:59:30.084419   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:59:30.084490   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:59:30.101260   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:59:30.118454   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:59:30.245117   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:59:30.402173   36356 docker.go:233] disabling docker service ...
	I0428 23:59:30.402240   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:59:30.419742   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:59:30.434799   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:59:30.586310   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:59:30.701297   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:59:30.717873   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:59:30.740576   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:59:30.740637   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.755747   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:59:30.755821   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.769519   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.783158   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.800160   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:59:30.812526   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.824663   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.845871   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.858527   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:59:30.871070   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:59:30.871116   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:59:30.892560   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:59:30.906892   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:31.047857   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
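
The preceding steps configure the node for CRI-O: cri-docker and docker are stopped and masked, /etc/crictl.yaml is pointed at the crio socket, 02-crio.conf is rewritten (pause image pinned to registry.k8s.io/pause:3.9, cgroupfs as the cgroup manager, conmon in the pod cgroup, net.ipv4.ip_unprivileged_port_start=0), br_netfilter is loaded and IP forwarding enabled, and crio is restarted. As a rough Go illustration of the config rewrites only (the real run applies sed over SSH; the starting values below are made up):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; initial values are assumed.
    	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\n" +
    		"cgroup_manager = \"systemd\"\n" +
    		"conmon_cgroup = \"system.slice\"\n"

    	// Same intent as the sed commands above: pin the pause image, switch the
    	// cgroup manager to cgroupfs, and run conmon in the pod cgroup.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	fmt.Print(conf)
    }
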
	I0428 23:59:31.608180   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:59:31.608258   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:59:31.613650   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:59:31.613712   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:59:31.618572   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:59:31.667744   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:59:31.667841   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:59:31.698887   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:59:31.732978   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:59:31.734467   36356 out.go:177]   - env NO_PROXY=192.168.39.237
	I0428 23:59:31.735737   36356 out.go:177]   - env NO_PROXY=192.168.39.237,192.168.39.43
	I0428 23:59:31.736997   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:31.739814   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:31.740186   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:31.740216   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:31.740374   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:59:31.745539   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:59:31.759169   36356 mustload.go:65] Loading cluster: ha-274394
	I0428 23:59:31.759367   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:31.759592   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:31.759625   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:31.774099   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0428 23:59:31.774493   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:31.774982   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:31.775008   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:31.775303   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:31.775488   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:59:31.777010   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:59:31.777277   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:31.777308   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:31.791488   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0428 23:59:31.791874   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:31.792798   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:31.792816   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:31.793108   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:31.793289   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:59:31.793448   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.250
	I0428 23:59:31.793462   36356 certs.go:194] generating shared ca certs ...
	I0428 23:59:31.793482   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.793619   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:59:31.793657   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:59:31.793665   36356 certs.go:256] generating profile certs ...
	I0428 23:59:31.793730   36356 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:59:31.793754   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005
	I0428 23:59:31.793767   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.250 192.168.39.254]
	I0428 23:59:31.935877   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 ...
	I0428 23:59:31.935910   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005: {Name:mkb1d55f40172ee8436492fe8f68a99e68fc03c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.936096   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005 ...
	I0428 23:59:31.936114   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005: {Name:mka939da220f505a93b36da1922b3c1aa6b40303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.936219   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:59:31.936357   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:59:31.936484   36356 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:59:31.936501   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:59:31.936513   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:59:31.936526   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:59:31.936537   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:59:31.936547   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:59:31.936557   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:59:31.936567   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:59:31.936577   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:59:31.936618   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:59:31.936644   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:59:31.936653   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:59:31.936674   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:59:31.936698   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:59:31.936718   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:59:31.936753   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:59:31.936778   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:59:31.936791   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:31.936803   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:59:31.936841   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:59:31.939693   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:31.940099   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:59:31.940126   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:31.940327   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:59:31.940489   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:59:31.940610   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:59:31.940730   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:59:32.014500   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0428 23:59:32.019823   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0428 23:59:32.035881   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0428 23:59:32.040779   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0428 23:59:32.057077   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0428 23:59:32.062345   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0428 23:59:32.076399   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0428 23:59:32.083489   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0428 23:59:32.100707   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0428 23:59:32.105612   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0428 23:59:32.119597   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0428 23:59:32.124604   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0428 23:59:32.138154   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:59:32.170086   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:59:32.198963   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:59:32.226907   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:59:32.253636   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0428 23:59:32.278554   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:59:32.304986   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:59:32.331547   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:59:32.359245   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:59:32.388215   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:59:32.415247   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:59:32.442937   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0428 23:59:32.462088   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0428 23:59:32.485219   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0428 23:59:32.505466   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0428 23:59:32.524971   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0428 23:59:32.543452   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0428 23:59:32.561866   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0428 23:59:32.580866   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:59:32.587281   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:59:32.599497   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.604579   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.604620   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.610534   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 23:59:32.622113   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:59:32.633898   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.639105   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.639152   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.645086   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:59:32.656327   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:59:32.667538   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.672551   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.672585   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.679007   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
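
Each CA above (207272.pem, minikubeCA.pem, 20727.pem) is placed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA). A small sketch that prints the equivalent link command for one such file; the path is a placeholder and openssl is assumed to be on PATH, as it is for the ssh_runner calls above:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Placeholder path; any PEM certificate works.
    	path := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// The run links each CA under its subject hash, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.
    	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", path, hash)
    }
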
	I0428 23:59:32.691035   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:59:32.695662   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:59:32.695716   36356 kubeadm.go:928] updating node {m03 192.168.39.250 8443 v1.30.0 crio true true} ...
	I0428 23:59:32.695808   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:59:32.695835   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:59:32.695872   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:59:32.712399   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:59:32.712452   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
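
The manifest above advertises the HA virtual IP 192.168.39.254 on port 8443 and enables control-plane load balancing (cp_enable/lb_enable). kube-vip runs as a static pod, so the generated YAML only has to land in kubelet's manifest directory, which the transfer a few lines below performs. A minimal Go sketch of that final write; the manifest is shortened here and the destination is assumed to be kubelet's default static-pod directory:

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Sketch only: the real content is the kube-vip config printed above
    	// (written later in this run as a 1346-byte kube-vip.yaml).
    	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
    	dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
    	if err := os.WriteFile(dst, manifest, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("wrote %s (%d bytes)", dst, len(manifest))
    }
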
	I0428 23:59:32.712493   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:59:32.722354   36356 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0428 23:59:32.722390   36356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0428 23:59:32.733638   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0428 23:59:32.733661   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:59:32.733670   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0428 23:59:32.733674   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0428 23:59:32.733720   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:59:32.733727   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:59:32.733688   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:59:32.733893   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:59:32.743978   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 23:59:32.744010   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0428 23:59:32.754412   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:59:32.754479   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 23:59:32.754511   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0428 23:59:32.754529   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:59:32.811837   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 23:59:32.811888   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
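
Because /var/lib/minikube/binaries/v1.30.0 did not exist on the new node, kubectl, kubeadm and kubelet are transferred there from the local cache (roughly 51MB, 50MB and 100MB). The dl.k8s.io URLs in the log carry a checksum=file: query so a download can be verified against the published .sha256; a tiny sketch that reconstructs those URLs from a version and component name:

    package main

    import "fmt"

    // releaseURL mirrors the download URLs in the log above: the binary itself plus a
    // checksum=file: query pointing at its published .sha256.
    func releaseURL(version, component string) string {
    	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, component)
    	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
    	for _, c := range []string{"kubectl", "kubeadm", "kubelet"} {
    		fmt.Println(releaseURL("v1.30.0", c))
    	}
    }
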
	I0428 23:59:33.742700   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0428 23:59:33.754911   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0428 23:59:33.775851   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:59:33.794540   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0428 23:59:33.812285   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:59:33.816564   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:59:33.831086   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:33.959822   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:59:33.979990   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:59:33.980339   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:33.980381   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:33.995673   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0428 23:59:33.996168   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:33.996792   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:33.996826   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:33.997145   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:33.997356   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:59:33.997488   36356 start.go:316] joinCluster: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:59:33.997595   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0428 23:59:33.997609   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:59:34.000823   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:34.001251   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:59:34.001282   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:34.001440   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:59:34.001603   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:59:34.001747   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:59:34.001869   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:59:34.163965   36356 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:59:34.164013   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xjl1hv.qr8jswflfz5d4crm --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0428 23:59:57.753188   36356 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xjl1hv.qr8jswflfz5d4crm --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (23.589146569s)
	I0428 23:59:57.753234   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0428 23:59:58.433673   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394-m03 minikube.k8s.io/updated_at=2024_04_28T23_59_58_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=false
	I0428 23:59:58.575531   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-274394-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0428 23:59:58.699252   36356 start.go:318] duration metric: took 24.701760658s to joinCluster
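The lines above record the full sequence for adding a third control-plane node: generate a join command on the primary with kubeadm token create --print-join-command, run kubeadm join ... --control-plane on the new machine, restart kubelet, then label the node and remove its control-plane NoSchedule taint. Below is a minimal sketch of that sequence, not minikube's implementation; it assumes passwordless SSH to two hypothetical host aliases, "primary" and "m03", and shells out with os/exec instead of minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command on a remote host over SSH and returns its combined output.
func run(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Ask the primary control plane for a join command (token + discovery CA cert hash).
	joinCmd, err := run("primary", "sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}
	joinCmd = strings.TrimSpace(joinCmd)

	// 2. Join the new machine as an additional control plane; the log adds further flags
	//    such as --apiserver-advertise-address and --ignore-preflight-errors=all.
	if _, err := run("m03", "sudo "+joinCmd+
		" --control-plane --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock"); err != nil {
		panic(err)
	}

	// 3. Make sure kubelet is enabled and running on the new node.
	if _, err := run("m03", "sudo systemctl daemon-reload && sudo systemctl enable --now kubelet"); err != nil {
		panic(err)
	}

	// 4. Label the node and drop the control-plane NoSchedule taint, as the log does.
	if _, err := run("primary", "kubectl label --overwrite nodes ha-274394-m03 minikube.k8s.io/primary=false"); err != nil {
		panic(err)
	}
	if _, err := run("primary", "kubectl taint nodes ha-274394-m03 node-role.kubernetes.io/control-plane:NoSchedule-"); err != nil {
		panic(err)
	}
	fmt.Println("ha-274394-m03 joined as an additional control plane")
}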
	I0428 23:59:58.699325   36356 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:59:58.700772   36356 out.go:177] * Verifying Kubernetes components...
	I0428 23:59:58.699710   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:58.702011   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:58.992947   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:59:59.039022   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:59:59.039347   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0428 23:59:59.039423   36356 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.237:8443
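The client config above is loaded from the kubeconfig and its host is then overridden from the HA VIP (192.168.39.254) to the first control plane (192.168.39.237), as the warning notes. A minimal client-go sketch of that pattern, assuming the kubeconfig path shown in the log is present:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the rest.Config from the kubeconfig written by minikube.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17977-13393/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Point the client at a specific control plane instead of the stale HA VIP,
	// mirroring the "Overriding stale ClientConfig host" message above.
	cfg.Host = "https://192.168.39.237:8443"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("talking to", cfg.Host, "client ready:", client != nil)
}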
	I0428 23:59:59.039662   36356 node_ready.go:35] waiting up to 6m0s for node "ha-274394-m03" to be "Ready" ...
	I0428 23:59:59.039739   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0428 23:59:59.039749   36356 round_trippers.go:469] Request Headers:
	I0428 23:59:59.039761   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:59:59.039769   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:59:59.044215   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:59:59.540563   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0428 23:59:59.540595   36356 round_trippers.go:469] Request Headers:
	I0428 23:59:59.540605   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:59:59.540611   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:59:59.545042   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:00.040363   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:00.040387   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:00.040395   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:00.040399   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:00.052445   36356 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 00:00:00.540532   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:00.540560   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:00.540570   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:00.540575   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:00.544839   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:01.040082   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:01.040105   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:01.040113   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:01.040116   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:01.043721   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:01.044695   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:01.540152   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:01.540175   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:01.540182   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:01.540185   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:01.544113   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:02.040892   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:02.040915   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:02.040926   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:02.040933   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:02.045818   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:02.539953   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:02.539976   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:02.539983   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:02.539988   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:02.544199   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:03.040237   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:03.040265   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:03.040276   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:03.040282   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:03.045325   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:03.046199   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:03.540612   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:03.540637   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:03.540650   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:03.540654   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:03.545488   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:04.040836   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:04.040868   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:04.040887   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:04.040895   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:04.046280   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:04.540501   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:04.540527   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:04.540544   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:04.540551   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:04.544986   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:05.040393   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:05.040420   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:05.040429   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:05.040437   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:05.045045   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:05.046512   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:05.540289   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:05.540310   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:05.540316   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:05.540320   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:05.545290   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:06.040177   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:06.040197   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:06.040203   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:06.040207   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:06.051024   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 00:00:06.540102   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:06.540136   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:06.540144   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:06.540148   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:06.544263   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.039962   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.039982   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.039990   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.039995   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.044076   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.540106   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.540132   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.540145   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.540151   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.543779   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.544580   36356 node_ready.go:49] node "ha-274394-m03" has status "Ready":"True"
	I0429 00:00:07.544602   36356 node_ready.go:38] duration metric: took 8.504923556s for node "ha-274394-m03" to be "Ready" ...
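The repeated GET /api/v1/nodes/ha-274394-m03 calls above are a readiness poll: the node is fetched roughly every 500ms until its Ready condition turns True, which here took about 8.5s. A hedged client-go sketch of such a loop, reusing the kubeconfig assumption from the earlier sketch:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17977-13393/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node every 500ms for up to 6 minutes, matching the log's node_ready wait.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-274394-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-274394-m03" is Ready`)
}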
	I0429 00:00:07.544611   36356 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 00:00:07.544667   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:07.544677   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.544684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.544687   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.553653   36356 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 00:00:07.561698   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.561817   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rslhx
	I0429 00:00:07.561827   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.561838   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.561847   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.565506   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.566518   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.566534   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.566540   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.566545   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.569905   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.570543   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.570562   36356 pod_ready.go:81] duration metric: took 8.831944ms for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.570571   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.570622   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xkdcv
	I0429 00:00:07.570630   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.570636   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.570640   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.574999   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.576125   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.576146   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.576157   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.576161   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.580599   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.581239   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.581256   36356 pod_ready.go:81] duration metric: took 10.67917ms for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.581274   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.581333   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394
	I0429 00:00:07.581342   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.581394   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.581408   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.589396   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:07.590368   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.590389   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.590396   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.590401   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.593754   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.594461   36356 pod_ready.go:92] pod "etcd-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.594479   36356 pod_ready.go:81] duration metric: took 13.196822ms for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.594491   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.594550   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0429 00:00:07.594561   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.594571   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.594579   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.598205   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.598968   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:07.598989   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.598997   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.599003   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.602493   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.603262   36356 pod_ready.go:92] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.603287   36356 pod_ready.go:81] duration metric: took 8.787518ms for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.603300   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.740735   36356 request.go:629] Waited for 137.335456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:07.740793   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:07.740799   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.740806   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.740810   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.744904   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
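The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own rate limiter: with the default rest.Config limits (QPS 5, burst 10) the alternating pod and node GETs above are briefly queued on the client before they are sent. A small sketch of loosening those limits, again assuming the same kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17977-13393/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5 and Burst=10; raising them avoids the
	// client-side "Waited for ..." throttling seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client configured with QPS", cfg.QPS, "burst", cfg.Burst, client != nil)
}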
	I0429 00:00:07.940222   36356 request.go:629] Waited for 194.103628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.940293   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.940300   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.940311   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.940320   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.944388   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:08.140733   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:08.140759   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.140768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.140772   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.146200   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:08.340435   36356 request.go:629] Waited for 193.229978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.340527   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.340535   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.340548   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.340554   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.348074   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:08.603651   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:08.603675   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.603684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.603690   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.607841   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:08.740178   36356 request.go:629] Waited for 131.240032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.740243   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.740251   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.740262   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.740269   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.744106   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.104560   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:09.104580   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.104587   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.104593   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.109646   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:09.141091   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:09.141121   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.141133   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.141140   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.145148   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.604524   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:09.606661   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.606683   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.606690   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.612435   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:09.614907   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:09.614925   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.614932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.614936   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.618703   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.619548   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:10.103536   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:10.103614   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.103630   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.103638   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.109798   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:10.111485   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:10.111504   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.111515   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.111522   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.115064   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:10.604281   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:10.604310   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.604320   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.604325   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.609142   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:10.610464   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:10.610484   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.610495   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.610500   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.616566   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:11.103607   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:11.103632   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.103642   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.103646   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.107835   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:11.108744   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:11.108761   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.108768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.108772   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.111862   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:11.603444   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:11.603463   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.603470   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.603475   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.607641   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:11.608655   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:11.608670   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.608682   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.608686   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.612140   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:12.104123   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:12.104145   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.104152   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.104156   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.108430   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:12.109205   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:12.109221   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.109228   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.109234   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.113119   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:12.113849   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:12.604327   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:12.604352   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.604363   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.604367   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.609419   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:12.610396   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:12.610415   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.610424   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.610429   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.613931   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:13.103988   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:13.104013   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.104020   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.104024   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.108252   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.109306   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:13.109326   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.109336   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.109342   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.113607   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.603763   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:13.603785   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.603795   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.603800   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.608023   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.608911   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:13.608963   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.608978   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.608983   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.612695   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.104203   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:14.104224   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.104233   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.104238   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.110775   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:14.111901   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.111922   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.111932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.111937   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.124604   36356 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 00:00:14.125329   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:14.603835   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:14.605874   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.605892   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.605898   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.610827   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:14.611910   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.611925   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.611932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.611936   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.616291   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:14.616947   36356 pod_ready.go:92] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.616966   36356 pod_ready.go:81] duration metric: took 7.0136586s for pod "etcd-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.616984   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.617056   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394
	I0429 00:00:14.617065   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.617072   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.617077   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.620266   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.620999   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:14.621015   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.621022   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.621028   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.624239   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.624743   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.624766   36356 pod_ready.go:81] duration metric: took 7.774137ms for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.624778   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.624845   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m02
	I0429 00:00:14.624856   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.624864   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.624868   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.628427   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.629157   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:14.629170   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.629177   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.629180   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.632260   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.632852   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.632874   36356 pod_ready.go:81] duration metric: took 8.087549ms for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.632887   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.632958   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m03
	I0429 00:00:14.632969   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.632979   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.632988   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.636284   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.740734   36356 request.go:629] Waited for 103.678901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.740817   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.740831   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.740841   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.740846   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.773084   36356 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0429 00:00:14.773907   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.773924   36356 pod_ready.go:81] duration metric: took 141.027444ms for pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.773933   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.940281   36356 request.go:629] Waited for 166.264237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0429 00:00:14.940343   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0429 00:00:14.940349   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.940360   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.940365   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.944365   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:15.140345   36356 request.go:629] Waited for 195.163934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:15.140423   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:15.140431   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.140439   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.140444   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.144062   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:15.144848   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.144866   36356 pod_ready.go:81] duration metric: took 370.926651ms for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.144875   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.340256   36356 request.go:629] Waited for 195.311171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0429 00:00:15.340322   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0429 00:00:15.340327   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.340355   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.340361   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.347923   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:15.540947   36356 request.go:629] Waited for 191.397388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:15.541007   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:15.541019   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.541026   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.541034   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.545131   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:15.545894   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.545917   36356 pod_ready.go:81] duration metric: took 401.034522ms for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.545930   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.740918   36356 request.go:629] Waited for 194.911345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m03
	I0429 00:00:15.741006   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m03
	I0429 00:00:15.741012   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.741021   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.741028   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.746377   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:15.940547   36356 request.go:629] Waited for 193.382908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:15.940633   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:15.940646   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.940656   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.940661   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.946477   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:15.947225   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.947250   36356 pod_ready.go:81] duration metric: took 401.3069ms for pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.947264   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rb7k" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.140188   36356 request.go:629] Waited for 192.839853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rb7k
	I0429 00:00:16.140294   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rb7k
	I0429 00:00:16.140312   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.140324   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.140332   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.145774   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:16.340228   36356 request.go:629] Waited for 193.697798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:16.340310   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:16.340317   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.340329   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.340339   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.344612   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:16.345393   36356 pod_ready.go:92] pod "kube-proxy-4rb7k" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:16.345411   36356 pod_ready.go:81] duration metric: took 398.139664ms for pod "kube-proxy-4rb7k" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.345423   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.540629   36356 request.go:629] Waited for 195.13398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0429 00:00:16.540716   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0429 00:00:16.540728   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.540738   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.540747   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.545764   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:16.740862   36356 request.go:629] Waited for 194.341193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:16.740912   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:16.740917   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.740924   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.740928   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.744945   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:16.745580   36356 pod_ready.go:92] pod "kube-proxy-g95c9" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:16.745613   36356 pod_ready.go:81] duration metric: took 400.179822ms for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.745629   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.940209   36356 request.go:629] Waited for 194.512821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0429 00:00:16.940295   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0429 00:00:16.940311   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.940321   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.940332   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.944152   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:17.140530   36356 request.go:629] Waited for 195.395948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.140617   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.140631   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.140641   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.140648   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.146669   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:17.147419   36356 pod_ready.go:92] pod "kube-proxy-pwbfs" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.147443   36356 pod_ready.go:81] duration metric: took 401.8052ms for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.147454   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.340500   36356 request.go:629] Waited for 192.965416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0429 00:00:17.340634   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0429 00:00:17.340647   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.340655   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.340670   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.344716   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.540937   36356 request.go:629] Waited for 195.33873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.541019   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.541031   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.541042   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.541052   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.545973   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.546666   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.546694   36356 pod_ready.go:81] duration metric: took 399.233302ms for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.546708   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.740796   36356 request.go:629] Waited for 193.996654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0429 00:00:17.740856   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0429 00:00:17.740862   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.740869   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.740872   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.745935   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:17.940323   36356 request.go:629] Waited for 193.289642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:17.940410   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:17.940419   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.940429   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.940440   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.945223   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.946163   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.946184   36356 pod_ready.go:81] duration metric: took 399.468104ms for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.946193   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:18.140175   36356 request.go:629] Waited for 193.91684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m03
	I0429 00:00:18.140243   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m03
	I0429 00:00:18.140248   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.140276   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.140289   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.145196   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.340751   36356 request.go:629] Waited for 194.401601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:18.340842   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:18.340851   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.340862   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.340873   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.344942   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.345590   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:18.345611   36356 pod_ready.go:81] duration metric: took 399.411796ms for pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:18.345623   36356 pod_ready.go:38] duration metric: took 10.801003405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 00:00:18.345641   36356 api_server.go:52] waiting for apiserver process to appear ...
	I0429 00:00:18.345710   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:00:18.374589   36356 api_server.go:72] duration metric: took 19.675227263s to wait for apiserver process to appear ...
	I0429 00:00:18.374620   36356 api_server.go:88] waiting for apiserver healthz status ...
	I0429 00:00:18.374648   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0429 00:00:18.379661   36356 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0429 00:00:18.379729   36356 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0429 00:00:18.379740   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.379754   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.379761   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.380937   36356 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 00:00:18.381043   36356 api_server.go:141] control plane version: v1.30.0
	I0429 00:00:18.381061   36356 api_server.go:131] duration metric: took 6.434791ms to wait for apiserver health ...
	I0429 00:00:18.381069   36356 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 00:00:18.540471   36356 request.go:629] Waited for 159.310973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.540534   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.540539   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.540546   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.540552   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.553803   36356 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 00:00:18.561130   36356 system_pods.go:59] 24 kube-system pods found
	I0429 00:00:18.561159   36356 system_pods.go:61] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0429 00:00:18.561164   36356 system_pods.go:61] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0429 00:00:18.561167   36356 system_pods.go:61] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0429 00:00:18.561171   36356 system_pods.go:61] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0429 00:00:18.561174   36356 system_pods.go:61] "etcd-ha-274394-m03" [64d0cf43-d3cd-4054-b44a-e8b4f8a70b06] Running
	I0429 00:00:18.561176   36356 system_pods.go:61] "kindnet-29qlf" [915875ab-c1aa-46d6-b5e1-b6a7eff8dd64] Running
	I0429 00:00:18.561179   36356 system_pods.go:61] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0429 00:00:18.561182   36356 system_pods.go:61] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0429 00:00:18.561185   36356 system_pods.go:61] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0429 00:00:18.561188   36356 system_pods.go:61] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0429 00:00:18.561191   36356 system_pods.go:61] "kube-apiserver-ha-274394-m03" [a9546d9d-7c2a-45c4-a0a5-a5efea4a04d9] Running
	I0429 00:00:18.561194   36356 system_pods.go:61] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0429 00:00:18.561197   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0429 00:00:18.561200   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m03" [f4094095-5c0c-4fb7-9c76-fb63e6c6eeb2] Running
	I0429 00:00:18.561203   36356 system_pods.go:61] "kube-proxy-4rb7k" [de261499-d4f2-44b0-869b-28ae3505f19f] Running
	I0429 00:00:18.561205   36356 system_pods.go:61] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0429 00:00:18.561209   36356 system_pods.go:61] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0429 00:00:18.561212   36356 system_pods.go:61] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0429 00:00:18.561214   36356 system_pods.go:61] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0429 00:00:18.561217   36356 system_pods.go:61] "kube-scheduler-ha-274394-m03" [7084f6de-4070-4d9b-b313-4b52f51123c7] Running
	I0429 00:00:18.561220   36356 system_pods.go:61] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0429 00:00:18.561222   36356 system_pods.go:61] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0429 00:00:18.561225   36356 system_pods.go:61] "kube-vip-ha-274394-m03" [bd6c2740-2068-4849-a23b-56d9ce0ac21c] Running
	I0429 00:00:18.561227   36356 system_pods.go:61] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0429 00:00:18.561232   36356 system_pods.go:74] duration metric: took 180.158592ms to wait for pod list to return data ...
	I0429 00:00:18.561240   36356 default_sa.go:34] waiting for default service account to be created ...
	I0429 00:00:18.740670   36356 request.go:629] Waited for 179.356953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0429 00:00:18.740727   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0429 00:00:18.740739   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.740746   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.740750   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.745356   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.745474   36356 default_sa.go:45] found service account: "default"
	I0429 00:00:18.745492   36356 default_sa.go:55] duration metric: took 184.245419ms for default service account to be created ...
	I0429 00:00:18.745502   36356 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 00:00:18.940811   36356 request.go:629] Waited for 195.241974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.940863   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.940868   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.940874   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.940886   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.950591   36356 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 00:00:18.957766   36356 system_pods.go:86] 24 kube-system pods found
	I0429 00:00:18.957805   36356 system_pods.go:89] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0429 00:00:18.957815   36356 system_pods.go:89] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0429 00:00:18.957821   36356 system_pods.go:89] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0429 00:00:18.957830   36356 system_pods.go:89] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0429 00:00:18.957836   36356 system_pods.go:89] "etcd-ha-274394-m03" [64d0cf43-d3cd-4054-b44a-e8b4f8a70b06] Running
	I0429 00:00:18.957844   36356 system_pods.go:89] "kindnet-29qlf" [915875ab-c1aa-46d6-b5e1-b6a7eff8dd64] Running
	I0429 00:00:18.957851   36356 system_pods.go:89] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0429 00:00:18.957859   36356 system_pods.go:89] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0429 00:00:18.957872   36356 system_pods.go:89] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0429 00:00:18.957880   36356 system_pods.go:89] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0429 00:00:18.957897   36356 system_pods.go:89] "kube-apiserver-ha-274394-m03" [a9546d9d-7c2a-45c4-a0a5-a5efea4a04d9] Running
	I0429 00:00:18.957905   36356 system_pods.go:89] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0429 00:00:18.957913   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0429 00:00:18.957924   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m03" [f4094095-5c0c-4fb7-9c76-fb63e6c6eeb2] Running
	I0429 00:00:18.957932   36356 system_pods.go:89] "kube-proxy-4rb7k" [de261499-d4f2-44b0-869b-28ae3505f19f] Running
	I0429 00:00:18.957940   36356 system_pods.go:89] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0429 00:00:18.957947   36356 system_pods.go:89] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0429 00:00:18.957956   36356 system_pods.go:89] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0429 00:00:18.957968   36356 system_pods.go:89] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0429 00:00:18.957976   36356 system_pods.go:89] "kube-scheduler-ha-274394-m03" [7084f6de-4070-4d9b-b313-4b52f51123c7] Running
	I0429 00:00:18.957987   36356 system_pods.go:89] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0429 00:00:18.957996   36356 system_pods.go:89] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0429 00:00:18.958005   36356 system_pods.go:89] "kube-vip-ha-274394-m03" [bd6c2740-2068-4849-a23b-56d9ce0ac21c] Running
	I0429 00:00:18.958014   36356 system_pods.go:89] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0429 00:00:18.958039   36356 system_pods.go:126] duration metric: took 212.530081ms to wait for k8s-apps to be running ...
	I0429 00:00:18.958053   36356 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 00:00:18.958113   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:00:18.980447   36356 system_svc.go:56] duration metric: took 22.384449ms WaitForService to wait for kubelet
	I0429 00:00:18.980482   36356 kubeadm.go:576] duration metric: took 20.281123012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:00:18.980513   36356 node_conditions.go:102] verifying NodePressure condition ...
	I0429 00:00:19.140458   36356 request.go:629] Waited for 159.863326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0429 00:00:19.140539   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0429 00:00:19.140546   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:19.140556   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:19.140562   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:19.145258   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:19.146414   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146436   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146451   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146457   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146462   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146466   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146472   36356 node_conditions.go:105] duration metric: took 165.952797ms to run NodePressure ...
	I0429 00:00:19.146487   36356 start.go:240] waiting for startup goroutines ...
	I0429 00:00:19.146521   36356 start.go:254] writing updated cluster config ...
	I0429 00:00:19.146849   36356 ssh_runner.go:195] Run: rm -f paused
	I0429 00:00:19.201608   36356 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 00:00:19.204410   36356 out.go:177] * Done! kubectl is now configured to use "ha-274394" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.700779802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349031700748842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cce4929-83ef-4a9d-a821-40d33cb16add name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.703388752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38196304-8235-470b-a4b9-9001f627a350 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.703479686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38196304-8235-470b-a4b9-9001f627a350 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.703798784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38196304-8235-470b-a4b9-9001f627a350 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.760179536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e325ac5c-c163-4821-b825-7ce79374aa4f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.760312715Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e325ac5c-c163-4821-b825-7ce79374aa4f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.762259793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=478bf059-8844-4358-b660-ba8789dfe194 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.762882912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349031762856169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=478bf059-8844-4358-b660-ba8789dfe194 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.764042861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11027ab6-b2a8-4a5f-b38b-f1085d277508 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.764148088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11027ab6-b2a8-4a5f-b38b-f1085d277508 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.765086498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11027ab6-b2a8-4a5f-b38b-f1085d277508 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.825544162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb81217a-b284-4f44-bf29-86a0b20c38fe name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.825640838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb81217a-b284-4f44-bf29-86a0b20c38fe name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.828131945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7613cc29-0916-4060-bd4e-b055ed72bd34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.829415816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349031829388876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7613cc29-0916-4060-bd4e-b055ed72bd34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.830033443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b9ace68-c043-4a3d-9d85-d5285e37b6e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.830115521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b9ace68-c043-4a3d-9d85-d5285e37b6e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.830355576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b9ace68-c043-4a3d-9d85-d5285e37b6e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.879106749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca668e6a-2e30-409d-8069-2b79c4ceaa0f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.879200770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca668e6a-2e30-409d-8069-2b79c4ceaa0f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.880342234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b805b342-3fbd-4d2d-a385-0a8cfc1710bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.880812752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349031880787506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b805b342-3fbd-4d2d-a385-0a8cfc1710bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.881506599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60011752-d923-4685-89bf-4e377a35b5d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.881556151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60011752-d923-4685-89bf-4e377a35b5d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:03:51 ha-274394 crio[683]: time="2024-04-29 00:03:51.881778796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60011752-d923-4685-89bf-4e377a35b5d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6191db59237ab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   7dc34422a092b       busybox-fc5497c4f-wwl6p
	39cef99138b5e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   86b45c3768b5c       coredns-7db6d8ff4d-rslhx
	4b75dd2cf8167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0a16b0222b334       coredns-7db6d8ff4d-xkdcv
	2c766c3729b06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f1817cc9d2fb2       storage-provisioner
	229d446ccd2c1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   24061593c71f1       kindnet-p6qmw
	10c90fba42aa7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   fe59c57afd7dc       kube-proxy-pwbfs
	1144436f5b67a       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   f4f6e257f8d6f       kube-vip-ha-274394
	a2665b4434106       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   9792afe7047da       etcd-ha-274394
	cd7d63b0cf58d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   fb9c09a8e5609       kube-scheduler-ha-274394
	d4d50ed07ba22       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   974770f9d2d8d       kube-apiserver-ha-274394
	ec35813faf9fb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   d6f56935776d1       kube-controller-manager-ha-274394
	
	
	==> coredns [39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e] <==
	[INFO] 10.244.2.2:36735 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169991s
	[INFO] 10.244.1.2:33891 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130376s
	[INFO] 10.244.1.2:52014 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135334s
	[INFO] 10.244.1.2:38829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00166462s
	[INFO] 10.244.1.2:60722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098874s
	[INFO] 10.244.0.4:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092957s
	[INFO] 10.244.0.4:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001823584s
	[INFO] 10.244.0.4:33350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106647s
	[INFO] 10.244.0.4:39835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220436s
	[INFO] 10.244.0.4:34474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060725s
	[INFO] 10.244.0.4:42677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076278s
	[INFO] 10.244.2.2:41566 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146322s
	[INFO] 10.244.2.2:39633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160447s
	[INFO] 10.244.2.2:36533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123881s
	[INFO] 10.244.1.2:54710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162932s
	[INFO] 10.244.1.2:59010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096219s
	[INFO] 10.244.1.2:39468 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158565s
	[INFO] 10.244.0.4:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179168s
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091044s
	[INFO] 10.244.2.2:46078 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195018s
	[INFO] 10.244.2.2:47504 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268349s
	[INFO] 10.244.1.2:34168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161101s
	[INFO] 10.244.0.4:52891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148878s
	[INFO] 10.244.0.4:43079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155917s
	[INFO] 10.244.0.4:46898 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114218s
	
	
	==> coredns [4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a] <==
	[INFO] 10.244.2.2:46937 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.035529764s
	[INFO] 10.244.2.2:48074 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014240201s
	[INFO] 10.244.1.2:36196 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000184659s
	[INFO] 10.244.1.2:52009 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000114923s
	[INFO] 10.244.0.4:54740 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078827s
	[INFO] 10.244.0.4:52614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00194917s
	[INFO] 10.244.2.2:33162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:57592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.023066556s
	[INFO] 10.244.2.2:57043 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235049s
	[INFO] 10.244.1.2:47075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014599s
	[INFO] 10.244.1.2:60870 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002072779s
	[INFO] 10.244.1.2:46861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094825s
	[INFO] 10.244.1.2:46908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186676s
	[INFO] 10.244.0.4:60188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001709235s
	[INFO] 10.244.0.4:43834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109382s
	[INFO] 10.244.2.2:42186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000296079s
	[INFO] 10.244.1.2:44715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184251s
	[INFO] 10.244.0.4:45543 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116414s
	[INFO] 10.244.0.4:47556 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083226s
	[INFO] 10.244.2.2:59579 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198403s
	[INFO] 10.244.2.2:42196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278968s
	[INFO] 10.244.1.2:34121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222019s
	[INFO] 10.244.1.2:54334 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016838s
	[INFO] 10.244.1.2:37434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099473s
	[INFO] 10.244.0.4:58711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000413259s
	
	
	==> describe nodes <==
	Name:               ha-274394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:57:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:03:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-274394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbc86a402e5548caa48d259a39be78de
	  System UUID:                bbc86a40-2e55-48ca-a48d-259a39be78de
	  Boot ID:                    b8dfffb5-63e7-4c7e-8e52-3cf4873fed01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwl6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-rslhx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7db6d8ff4d-xkdcv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-274394                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-p6qmw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-274394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-274394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-pwbfs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-274394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-274394                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m33s (x6 over 6m33s)  kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x7 over 6m33s)  kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x6 over 6m33s)  kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s                  kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s                  kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s                  kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal  NodeReady                6m11s                  kubelet          Node ha-274394 status is now: NodeReady
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	
	
	Name:               ha-274394-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:58:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:01:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-274394-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b55609ff590f4bdba17fff0e954879c9
	  System UUID:                b55609ff-590f-4bdb-a17f-ff0e954879c9
	  Boot ID:                    855b13e2-38c0-4157-be3d-1ab6ccd7558c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmk6v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-274394-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-6qf7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-274394-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-274394-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-g95c9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-274394-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-vip-ha-274394-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m17s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-274394-m02 status is now: NodeNotReady
	
	
	Name:               ha-274394-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_59_58_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:03:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:00:25 +0000   Mon, 29 Apr 2024 00:00:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-274394-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d93714f0c10b4313b4406039da06a844
	  System UUID:                d93714f0-c10b-4313-b440-6039da06a844
	  Boot ID:                    f3fcc183-a68b-4912-a90c-8983fd2d233d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kjcqn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-274394-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-29qlf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-apiserver-ha-274394-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-274394-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-4rb7k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-ha-274394-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-274394-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x8 over 3m58s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x8 over 3m58s)  kubelet          Node ha-274394-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x7 over 3m58s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	
	
	Name:               ha-274394-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_00_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:00:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:03:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:01:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-274394-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eda4c6845a404536baab34c56e482672
	  System UUID:                eda4c684-5a40-4536-baab-34c56e482672
	  Boot ID:                    3678260a-6c98-4396-a49b-11d148407cb5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7wp2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-4h24n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-274394-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr28 23:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052234] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044810] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.652996] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.522934] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr28 23:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.108939] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.062174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072067] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.188727] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118445] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.277590] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.051195] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.782579] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.939635] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.597447] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.110049] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.496389] kauditd_printk_skb: 21 callbacks suppressed
	[Apr28 23:58] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6] <==
	{"level":"warn","ts":"2024-04-29T00:03:52.177382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.192003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.196627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.219201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.227273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.227438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.234726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.238253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.241975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.25032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.252561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.255628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.259548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.266105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.269366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.273414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.281894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.288065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.294407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.298105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.302526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.307541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.313296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.319491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:03:52.327272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:03:52 up 7 min,  0 users,  load average: 0.16, 0.24, 0.11
	Linux ha-274394 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627] <==
	I0429 00:03:21.359106       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:03:31.377267       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:03:31.377359       1 main.go:227] handling current node
	I0429 00:03:31.377384       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:03:31.377402       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:03:31.377574       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:03:31.377605       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:03:31.377659       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:03:31.377676       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:03:41.387289       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:03:41.387346       1 main.go:227] handling current node
	I0429 00:03:41.387361       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:03:41.387369       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:03:41.387501       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:03:41.387542       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:03:41.387657       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:03:41.387668       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:03:51.395746       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:03:51.395827       1 main.go:227] handling current node
	I0429 00:03:51.395845       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:03:51.395854       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:03:51.396062       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:03:51.396108       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:03:51.396181       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:03:51.396191       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f] <==
	E0428 23:58:36.868759       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0428 23:58:36.868653       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 13.607µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0428 23:58:36.870554       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0428 23:58:36.870719       1 timeout.go:142] post-timeout activity - time-elapsed: 2.339766ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0429 00:00:25.137040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40050: use of closed network connection
	E0429 00:00:25.379481       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40060: use of closed network connection
	E0429 00:00:25.604805       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40076: use of closed network connection
	E0429 00:00:25.871777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40096: use of closed network connection
	E0429 00:00:26.091618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40108: use of closed network connection
	E0429 00:00:26.341033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40122: use of closed network connection
	E0429 00:00:26.594661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40132: use of closed network connection
	E0429 00:00:26.812137       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40148: use of closed network connection
	E0429 00:00:27.039810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40166: use of closed network connection
	E0429 00:00:27.395521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40194: use of closed network connection
	E0429 00:00:27.600587       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33016: use of closed network connection
	E0429 00:00:27.822892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33036: use of closed network connection
	E0429 00:00:28.227332       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33060: use of closed network connection
	E0429 00:00:28.448673       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33082: use of closed network connection
	I0429 00:01:04.494213       1 trace.go:236] Trace[1312671356]: "Get" accept:application/json, */*,audit-id:59ccec51-1525-472a-acd9-d032d2c2bfbf,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 00:01:03.965) (total time: 528ms):
	Trace[1312671356]: ---"About to write a response" 528ms (00:01:04.494)
	Trace[1312671356]: [528.544822ms] [528.544822ms] END
	I0429 00:01:04.494822       1 trace.go:236] Trace[213141482]: "Update" accept:application/json, */*,audit-id:7b71b3b8-6c51-4af0-b485-7eb34cb112ec,client:192.168.39.237,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:01:03.939) (total time: 555ms):
	Trace[213141482]: ["GuaranteedUpdate etcd3" audit-id:7b71b3b8-6c51-4af0-b485-7eb34cb112ec,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 555ms (00:01:03.939)
	Trace[213141482]:  ---"Txn call completed" 554ms (00:01:04.494)]
	Trace[213141482]: [555.360928ms] [555.360928ms] END
	
	
	==> kube-controller-manager [ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9] <==
	I0428 23:58:38.037991       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m02"
	E0428 23:59:54.803847       1 certificate_controller.go:146] Sync csr-wdx26 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wdx26": the object has been modified; please apply your changes to the latest version and try again
	I0428 23:59:54.892346       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-274394-m03\" does not exist"
	I0428 23:59:54.940848       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-274394-m03" podCIDRs=["10.244.2.0/24"]
	I0428 23:59:58.081684       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m03"
	I0429 00:00:20.257387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.850346ms"
	I0429 00:00:20.462634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="204.923458ms"
	I0429 00:00:20.642183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.010358ms"
	E0429 00:00:20.642327       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:00:20.663485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.009121ms"
	I0429 00:00:20.664416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.99µs"
	I0429 00:00:24.116794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.639622ms"
	I0429 00:00:24.116983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.564µs"
	I0429 00:00:24.173133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.902432ms"
	I0429 00:00:24.173525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.207µs"
	I0429 00:00:24.532136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.587076ms"
	I0429 00:00:24.534021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.27µs"
	E0429 00:00:58.696735       1 certificate_controller.go:146] Sync csr-9ztb7 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-9ztb7": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:00:58.906478       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-274394-m04\" does not exist"
	I0429 00:00:59.000800       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-274394-m04" podCIDRs=["10.244.3.0/24"]
	I0429 00:01:03.130368       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m04"
	I0429 00:01:10.233626       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	I0429 00:02:09.531221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	I0429 00:02:09.671054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.148922ms"
	I0429 00:02:09.671354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.538µs"
	
	
	==> kube-proxy [10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a] <==
	I0428 23:57:40.051962       1 server_linux.go:69] "Using iptables proxy"
	I0428 23:57:40.064077       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	I0428 23:57:40.189337       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0428 23:57:40.189412       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0428 23:57:40.189431       1 server_linux.go:165] "Using iptables Proxier"
	I0428 23:57:40.192878       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0428 23:57:40.193163       1 server.go:872] "Version info" version="v1.30.0"
	I0428 23:57:40.193199       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0428 23:57:40.194579       1 config.go:192] "Starting service config controller"
	I0428 23:57:40.194626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0428 23:57:40.194648       1 config.go:101] "Starting endpoint slice config controller"
	I0428 23:57:40.194651       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0428 23:57:40.195219       1 config.go:319] "Starting node config controller"
	I0428 23:57:40.195253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0428 23:57:40.301194       1 shared_informer.go:320] Caches are synced for node config
	I0428 23:57:40.301247       1 shared_informer.go:320] Caches are synced for service config
	I0428 23:57:40.301268       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1] <==
	W0428 23:57:23.781454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0428 23:57:23.781572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0428 23:57:23.911255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0428 23:57:23.911314       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0428 23:57:23.977138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:23.977200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.127191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:24.127258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.129643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:24.129701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.145975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0428 23:57:24.146030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0428 23:57:24.181216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0428 23:57:24.181243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0428 23:57:24.192501       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0428 23:57:24.192554       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0428 23:57:26.404129       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 00:00:20.266398       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwl6p\": pod busybox-fc5497c4f-wwl6p is already assigned to node \"ha-274394\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wwl6p" node="ha-274394"
	E0429 00:00:20.266508       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kjcqn\": pod busybox-fc5497c4f-kjcqn is already assigned to node \"ha-274394-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kjcqn" node="ha-274394-m03"
	E0429 00:00:20.271638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a6a06956-e991-47ab-986f-34d9467a7dec(default/busybox-fc5497c4f-wwl6p) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wwl6p"
	E0429 00:00:20.272546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwl6p\": pod busybox-fc5497c4f-wwl6p is already assigned to node \"ha-274394\"" pod="default/busybox-fc5497c4f-wwl6p"
	I0429 00:00:20.273228       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wwl6p" node="ha-274394"
	E0429 00:00:20.271544       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 76314c87-6b7d-4bfa-83ce-3ace75fa7aee(default/busybox-fc5497c4f-kjcqn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kjcqn"
	E0429 00:00:20.273888       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kjcqn\": pod busybox-fc5497c4f-kjcqn is already assigned to node \"ha-274394-m03\"" pod="default/busybox-fc5497c4f-kjcqn"
	I0429 00:00:20.274053       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kjcqn" node="ha-274394-m03"
	
	
	==> kubelet <==
	Apr 28 23:59:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 28 23:59:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:00:20 ha-274394 kubelet[1379]: I0429 00:00:20.232784    1379 topology_manager.go:215] "Topology Admit Handler" podUID="a6a06956-e991-47ab-986f-34d9467a7dec" podNamespace="default" podName="busybox-fc5497c4f-wwl6p"
	Apr 29 00:00:20 ha-274394 kubelet[1379]: I0429 00:00:20.358052    1379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbb9\" (UniqueName: \"kubernetes.io/projected/a6a06956-e991-47ab-986f-34d9467a7dec-kube-api-access-wvbb9\") pod \"busybox-fc5497c4f-wwl6p\" (UID: \"a6a06956-e991-47ab-986f-34d9467a7dec\") " pod="default/busybox-fc5497c4f-wwl6p"
	Apr 29 00:00:26 ha-274394 kubelet[1379]: E0429 00:00:26.208715    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:00:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:00:28 ha-274394 kubelet[1379]: E0429 00:00:28.227727    1379 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52374->127.0.0.1:41399: write tcp 127.0.0.1:52374->127.0.0.1:41399: write: broken pipe
	Apr 29 00:01:26 ha-274394 kubelet[1379]: E0429 00:01:26.204575    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:01:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:02:26 ha-274394 kubelet[1379]: E0429 00:02:26.204755    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:02:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:03:26 ha-274394 kubelet[1379]: E0429 00:03:26.207355    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:03:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-274394 -n ha-274394
helpers_test.go:261: (dbg) Run:  kubectl --context ha-274394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (52.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (3.200069784s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:03:57.045690   41210 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:03:57.045916   41210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:03:57.045924   41210 out.go:304] Setting ErrFile to fd 2...
	I0429 00:03:57.045929   41210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:03:57.046145   41210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:03:57.046338   41210 out.go:298] Setting JSON to false
	I0429 00:03:57.046361   41210 mustload.go:65] Loading cluster: ha-274394
	I0429 00:03:57.046422   41210 notify.go:220] Checking for updates...
	I0429 00:03:57.046768   41210 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:03:57.046783   41210 status.go:255] checking status of ha-274394 ...
	I0429 00:03:57.047351   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.047387   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.063341   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33031
	I0429 00:03:57.063829   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.064316   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.064338   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.064730   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.064911   41210 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:03:57.066779   41210 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:03:57.066799   41210 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:03:57.067072   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.067107   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.083817   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0429 00:03:57.084445   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.084907   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.084926   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.085283   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.085465   41210 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:03:57.088287   41210 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:57.088802   41210 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:03:57.088835   41210 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:57.088956   41210 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:03:57.089239   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.089272   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.103574   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0429 00:03:57.103977   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.104453   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.104473   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.104778   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.104961   41210 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:03:57.105129   41210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:57.105149   41210 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:03:57.107936   41210 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:57.108323   41210 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:03:57.108354   41210 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:03:57.108500   41210 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:03:57.108701   41210 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:03:57.108861   41210 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:03:57.109004   41210 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:03:57.190460   41210 ssh_runner.go:195] Run: systemctl --version
	I0429 00:03:57.197385   41210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:03:57.215591   41210 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:03:57.215616   41210 api_server.go:166] Checking apiserver status ...
	I0429 00:03:57.215648   41210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:03:57.232259   41210 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:03:57.246959   41210 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:03:57.247018   41210 ssh_runner.go:195] Run: ls
	I0429 00:03:57.252118   41210 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:03:57.258004   41210 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:03:57.258044   41210 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:03:57.258058   41210 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:03:57.258073   41210 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:03:57.258383   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.258431   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.274425   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0429 00:03:57.274875   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.275411   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.275442   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.275820   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.276045   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:03:57.277740   41210 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:03:57.277756   41210 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:03:57.278223   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.278266   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.295606   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0429 00:03:57.296096   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.296548   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.296573   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.296844   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.297015   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:03:57.299526   41210 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:57.299945   41210 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:03:57.299979   41210 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:57.300091   41210 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:03:57.300377   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:57.300406   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:57.316075   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0429 00:03:57.316463   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:57.316936   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:57.316960   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:57.317262   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:57.317440   41210 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:03:57.317624   41210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:57.317642   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:03:57.319997   41210 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:57.320345   41210 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:03:57.320373   41210 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:03:57.320508   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:03:57.320685   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:03:57.320842   41210 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:03:57.320999   41210 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:03:59.818293   41210 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:03:59.818414   41210 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:03:59.818438   41210 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:03:59.818445   41210 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:03:59.818462   41210 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:03:59.818472   41210 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:03:59.818799   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:59.818854   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:59.833765   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0429 00:03:59.834188   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:59.834774   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:59.834801   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:59.835142   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:59.835331   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:03:59.836900   41210 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:03:59.836917   41210 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:03:59.837233   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:59.837274   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:59.851504   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0429 00:03:59.851887   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:59.852278   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:59.852299   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:59.852580   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:59.852750   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:03:59.855157   41210 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:59.855571   41210 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:03:59.855598   41210 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:59.855739   41210 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:03:59.856010   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:03:59.856042   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:03:59.871133   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0429 00:03:59.871683   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:03:59.872232   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:03:59.872254   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:03:59.872573   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:03:59.872717   41210 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:03:59.872933   41210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:03:59.872958   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:03:59.876019   41210 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:59.876488   41210 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:03:59.876510   41210 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:03:59.876732   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:03:59.876928   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:03:59.877124   41210 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:03:59.877344   41210 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:03:59.971414   41210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:03:59.989883   41210 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:03:59.989920   41210 api_server.go:166] Checking apiserver status ...
	I0429 00:03:59.989956   41210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:00.009859   41210 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:00.022169   41210 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:00.022226   41210 ssh_runner.go:195] Run: ls
	I0429 00:04:00.027681   41210 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:00.032278   41210 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:00.032305   41210 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:00.032314   41210 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:00.032327   41210 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:00.032591   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:00.032641   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:00.047361   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I0429 00:04:00.047740   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:00.048239   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:04:00.048266   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:00.048587   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:00.048767   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:00.050268   41210 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:00.050282   41210 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:00.050567   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:00.050597   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:00.065880   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0429 00:04:00.066264   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:00.066718   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:04:00.066741   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:00.067007   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:00.067219   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:00.069883   41210 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:00.070316   41210 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:00.070340   41210 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:00.070497   41210 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:00.070779   41210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:00.070829   41210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:00.084868   41210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0429 00:04:00.085237   41210 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:00.085709   41210 main.go:141] libmachine: Using API Version  1
	I0429 00:04:00.085735   41210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:00.086003   41210 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:00.086204   41210 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:00.086383   41210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:00.086407   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:00.088996   41210 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:00.089421   41210 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:00.089447   41210 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:00.089584   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:00.089750   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:00.089861   41210 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:00.089976   41210 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:00.171015   41210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:00.190811   41210 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (5.143658941s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:04:01.239309   41310 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:01.239568   41310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:01.239579   41310 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:01.239583   41310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:01.239843   41310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:01.240022   41310 out.go:298] Setting JSON to false
	I0429 00:04:01.240047   41310 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:01.240081   41310 notify.go:220] Checking for updates...
	I0429 00:04:01.240419   41310 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:01.240433   41310 status.go:255] checking status of ha-274394 ...
	I0429 00:04:01.240800   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.240881   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.255840   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0429 00:04:01.256187   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.256904   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.256946   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.257251   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.257462   41310 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:01.259042   41310 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:01.259065   41310 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:01.259351   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.259390   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.273317   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0429 00:04:01.273689   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.274135   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.274160   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.274447   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.274608   41310 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:01.277304   41310 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:01.277734   41310 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:01.277764   41310 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:01.277901   41310 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:01.278340   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.278387   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.293876   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0429 00:04:01.294263   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.294733   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.294754   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.295042   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.295245   41310 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:01.295432   41310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:01.295457   41310 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:01.298442   41310 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:01.298874   41310 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:01.298915   41310 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:01.299046   41310 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:01.299223   41310 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:01.299397   41310 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:01.299576   41310 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:01.383861   41310 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:01.390600   41310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:01.407977   41310 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:01.408005   41310 api_server.go:166] Checking apiserver status ...
	I0429 00:04:01.408038   41310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:01.423361   41310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:01.437317   41310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:01.437365   41310 ssh_runner.go:195] Run: ls
	I0429 00:04:01.446308   41310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:01.450487   41310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:01.450508   41310 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:01.450520   41310 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:01.450545   41310 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:01.450813   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.450861   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.466438   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I0429 00:04:01.466880   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.467366   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.467388   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.467746   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.467953   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:01.469372   41310 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:04:01.469390   41310 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:01.469796   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.469838   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.484473   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
	I0429 00:04:01.484917   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.485377   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.485395   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.485665   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.485828   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:04:01.488071   41310 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:01.488414   41310 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:01.488450   41310 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:01.488535   41310 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:01.489410   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:01.489454   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:01.505167   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0429 00:04:01.505534   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:01.505986   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:01.506008   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:01.506324   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:01.506536   41310 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:04:01.506744   41310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:01.506764   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:04:01.509436   41310 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:01.509887   41310 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:01.509918   41310 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:01.510081   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:04:01.510239   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:04:01.510399   41310 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:04:01.510544   41310 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:04:02.886391   41310 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:02.886481   41310 retry.go:31] will retry after 372.120245ms: dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:05.958264   41310 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:05.958361   41310 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:04:05.958388   41310 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:05.958401   41310 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:04:05.958428   41310 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:05.958437   41310 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:05.958766   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:05.958824   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:05.973931   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0429 00:04:05.974394   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:05.974926   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:05.974953   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:05.975332   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:05.975629   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:05.977210   41310 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:05.977226   41310 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:05.977490   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:05.977544   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:05.993405   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0429 00:04:05.993784   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:05.994200   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:05.994222   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:05.994531   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:05.994721   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:05.997173   41310 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:05.997586   41310 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:05.997610   41310 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:05.997734   41310 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:05.998082   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:05.998126   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:06.012598   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0429 00:04:06.013066   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:06.013542   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:06.013562   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:06.013915   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:06.014124   41310 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:06.014345   41310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:06.014364   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:06.017116   41310 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:06.017525   41310 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:06.017558   41310 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:06.017672   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:06.017841   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:06.017981   41310 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:06.018106   41310 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:06.106904   41310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:06.130111   41310 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:06.130142   41310 api_server.go:166] Checking apiserver status ...
	I0429 00:04:06.130179   41310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:06.147624   41310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:06.157540   41310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:06.157584   41310 ssh_runner.go:195] Run: ls
	I0429 00:04:06.163282   41310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:06.167850   41310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:06.167870   41310 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:06.167878   41310 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:06.167891   41310 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:06.168167   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:06.168214   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:06.182843   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0429 00:04:06.183242   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:06.183715   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:06.183739   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:06.184042   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:06.184240   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:06.185621   41310 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:06.185635   41310 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:06.185913   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:06.185952   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:06.200123   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
	I0429 00:04:06.200523   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:06.200938   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:06.200960   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:06.201280   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:06.201456   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:06.203764   41310 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:06.204177   41310 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:06.204204   41310 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:06.204347   41310 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:06.204643   41310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:06.204677   41310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:06.219920   41310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
	I0429 00:04:06.220312   41310 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:06.220736   41310 main.go:141] libmachine: Using API Version  1
	I0429 00:04:06.220761   41310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:06.221131   41310 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:06.221320   41310 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:06.221523   41310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:06.221541   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:06.224038   41310 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:06.224459   41310 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:06.224509   41310 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:06.224640   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:06.224808   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:06.224958   41310 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:06.225091   41310 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:06.307230   41310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:06.325732   41310 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
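The stderr trace above shows how `minikube status` probes each node: it launches the kvm2 driver plugin, opens an SSH session, runs `df -h /var | awk 'NR==2{print $5}'` for storage usage, and checks the apiserver for control-plane nodes. For ha-274394-m02 the SSH dial never succeeds (`dial tcp 192.168.39.43:22: connect: no route to host`), which is why the summary reports host: Error and kubelet/apiserver: Nonexistent. Below is a minimal, stdlib-only Go sketch of that connectivity probe; the node IP and retry delay are taken from the log, everything else is illustrative and not minikube's actual code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH mimics the reachability check seen in the log: dial the node's
	// SSH port and retry once after a short delay (the log shows minikube
	// retrying "after 372.120245ms") before giving up.
	func probeSSH(addr string) error {
		var err error
		for attempt := 0; attempt < 2; attempt++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(372 * time.Millisecond)
		}
		return err
	}

	func main() {
		// IP taken from the ha-274394-m02 DHCP lease in the log above.
		if err := probeSSH("192.168.39.43:22"); err != nil {
			fmt.Println("host: Error -", err) // e.g. "connect: no route to host"
		} else {
			fmt.Println("host: Running")
		}
	}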
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (5.205356031s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:04:07.336629   41412 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:07.336746   41412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:07.336755   41412 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:07.336759   41412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:07.336965   41412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:07.337129   41412 out.go:298] Setting JSON to false
	I0429 00:04:07.337154   41412 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:07.337275   41412 notify.go:220] Checking for updates...
	I0429 00:04:07.337580   41412 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:07.337597   41412 status.go:255] checking status of ha-274394 ...
	I0429 00:04:07.337951   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.338001   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.353569   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I0429 00:04:07.353940   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.354524   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.354545   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.354970   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.355194   41412 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:07.356839   41412 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:07.356852   41412 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:07.357177   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.357216   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.371278   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0429 00:04:07.371612   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.372014   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.372050   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.372344   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.372521   41412 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:07.375230   41412 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:07.375637   41412 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:07.375663   41412 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:07.375823   41412 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:07.376091   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.376128   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.390489   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0429 00:04:07.390854   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.391277   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.391296   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.391623   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.391811   41412 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:07.391980   41412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:07.392021   41412 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:07.394701   41412 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:07.395122   41412 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:07.395142   41412 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:07.395299   41412 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:07.395482   41412 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:07.395619   41412 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:07.395787   41412 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:07.480274   41412 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:07.489090   41412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:07.512181   41412 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:07.512213   41412 api_server.go:166] Checking apiserver status ...
	I0429 00:04:07.512251   41412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:07.531893   41412 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:07.544837   41412 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:07.544915   41412 ssh_runner.go:195] Run: ls
	I0429 00:04:07.549867   41412 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:07.556352   41412 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:07.556373   41412 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:07.556383   41412 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:07.556403   41412 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:07.556682   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.556714   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.571278   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I0429 00:04:07.571728   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.572270   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.572299   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.572590   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.572793   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:07.574287   41412 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:04:07.574307   41412 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:07.574573   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.574603   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.589268   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0429 00:04:07.589631   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.590097   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.590122   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.590410   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.590565   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:04:07.593211   41412 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:07.593597   41412 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:07.593626   41412 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:07.593747   41412 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:07.594012   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:07.594064   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:07.610332   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0429 00:04:07.610759   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:07.611220   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:07.611244   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:07.611554   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:07.611771   41412 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:04:07.611964   41412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:07.611981   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:04:07.614868   41412 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:07.615306   41412 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:07.615336   41412 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:07.615476   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:04:07.615648   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:04:07.615801   41412 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:04:07.615945   41412 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:04:09.030259   41412 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:09.030320   41412 retry.go:31] will retry after 313.93234ms: dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:12.102281   41412 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:12.102389   41412 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:04:12.102406   41412 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:12.102413   41412 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:04:12.102437   41412 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:12.102445   41412 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:12.102759   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.102801   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.118861   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0429 00:04:12.119349   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.119837   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.119856   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.120183   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.120363   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:12.122113   41412 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:12.122129   41412 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:12.122458   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.122515   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.137467   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I0429 00:04:12.137927   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.138394   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.138419   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.138754   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.138929   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:12.141741   41412 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:12.142224   41412 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:12.142261   41412 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:12.142421   41412 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:12.142763   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.142807   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.158003   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33623
	I0429 00:04:12.158432   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.158913   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.158937   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.159254   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.159448   41412 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:12.159644   41412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:12.159668   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:12.162524   41412 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:12.162997   41412 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:12.163030   41412 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:12.163172   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:12.163331   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:12.163482   41412 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:12.163626   41412 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:12.253099   41412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:12.274486   41412 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:12.274515   41412 api_server.go:166] Checking apiserver status ...
	I0429 00:04:12.274554   41412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:12.299275   41412 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:12.315141   41412 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:12.315204   41412 ssh_runner.go:195] Run: ls
	I0429 00:04:12.320513   41412 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:12.325817   41412 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:12.325849   41412 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:12.325860   41412 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:12.325879   41412 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:12.326217   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.326253   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.341184   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0429 00:04:12.341564   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.342157   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.342184   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.342499   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.342690   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:12.344208   41412 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:12.344224   41412 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:12.344601   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.344645   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.360586   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0429 00:04:12.361129   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.361668   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.361692   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.362071   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.362279   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:12.365117   41412 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:12.365673   41412 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:12.365704   41412 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:12.365811   41412 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:12.366104   41412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:12.366143   41412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:12.381000   41412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0429 00:04:12.381382   41412 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:12.381819   41412 main.go:141] libmachine: Using API Version  1
	I0429 00:04:12.381838   41412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:12.382224   41412 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:12.382394   41412 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:12.382556   41412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:12.382577   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:12.385018   41412 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:12.385463   41412 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:12.385484   41412 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:12.385641   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:12.385807   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:12.385923   41412 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:12.386046   41412 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:12.467326   41412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:12.485595   41412 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
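For the reachable control-plane nodes (ha-274394 and ha-274394-m03), the same trace shows an apiserver health probe against the HA virtual IP, `https://192.168.39.254:8443/healthz`, which returns 200/ok. A minimal sketch of that probe follows; the endpoint is taken from the log, while the HTTP client setup (including skipping certificate verification) is only for this illustration, since the real check uses the cluster's kubeconfig credentials.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log; InsecureSkipVerify is only for this sketch.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}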
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (4.781946927s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:04:13.875663   41512 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:13.875836   41512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:13.875862   41512 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:13.875878   41512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:13.876462   41512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:13.876664   41512 out.go:298] Setting JSON to false
	I0429 00:04:13.876691   41512 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:13.876803   41512 notify.go:220] Checking for updates...
	I0429 00:04:13.877049   41512 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:13.877062   41512 status.go:255] checking status of ha-274394 ...
	I0429 00:04:13.877419   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:13.877479   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:13.896156   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0429 00:04:13.896564   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:13.897252   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:13.897279   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:13.897768   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:13.898042   41512 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:13.899986   41512 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:13.900004   41512 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:13.900389   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:13.900434   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:13.915099   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42841
	I0429 00:04:13.915536   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:13.915994   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:13.916015   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:13.916290   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:13.916430   41512 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:13.918772   41512 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:13.919157   41512 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:13.919181   41512 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:13.919296   41512 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:13.919558   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:13.919596   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:13.933766   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0429 00:04:13.934205   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:13.934615   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:13.934638   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:13.934911   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:13.935100   41512 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:13.935280   41512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:13.935305   41512 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:13.937996   41512 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:13.938407   41512 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:13.938433   41512 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:13.938559   41512 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:13.938754   41512 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:13.938915   41512 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:13.939096   41512 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:14.024996   41512 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:14.032838   41512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:14.049285   41512 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:14.049320   41512 api_server.go:166] Checking apiserver status ...
	I0429 00:04:14.049366   41512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:14.070611   41512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:14.083890   41512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:14.083953   41512 ssh_runner.go:195] Run: ls
	I0429 00:04:14.090685   41512 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:14.095159   41512 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:14.095185   41512 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:14.095195   41512 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:14.095210   41512 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:14.095562   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:14.095629   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:14.111850   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40549
	I0429 00:04:14.112298   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:14.112870   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:14.112896   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:14.113267   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:14.113479   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:14.115090   41512 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:04:14.115106   41512 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:14.115440   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:14.115488   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:14.130255   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I0429 00:04:14.130741   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:14.131262   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:14.131292   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:14.131652   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:14.131877   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:04:14.134555   41512 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:14.134990   41512 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:14.135023   41512 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:14.135154   41512 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:14.135487   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:14.135521   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:14.149802   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0429 00:04:14.150210   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:14.150627   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:14.150647   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:14.150945   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:14.151134   41512 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:04:14.151315   41512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:14.151338   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:04:14.154093   41512 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:14.154495   41512 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:14.154526   41512 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:14.154696   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:04:14.154880   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:04:14.155087   41512 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:04:14.155249   41512 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:04:15.174383   41512 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:15.174437   41512 retry.go:31] will retry after 319.748073ms: dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:18.246353   41512 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:18.246436   41512 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:04:18.246459   41512 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:18.246469   41512 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:04:18.246506   41512 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:18.246521   41512 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:18.246843   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.246895   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.261329   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0429 00:04:18.261719   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.262227   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.262250   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.262577   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.262763   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:18.264224   41512 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:18.264237   41512 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:18.264511   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.264546   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.278229   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42857
	I0429 00:04:18.278664   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.279137   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.279161   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.279467   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.279635   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:18.282343   41512 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:18.282810   41512 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:18.282856   41512 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:18.283007   41512 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:18.283372   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.283418   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.297055   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0429 00:04:18.297399   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.297879   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.297901   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.298194   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.298369   41512 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:18.298522   41512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:18.298541   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:18.301017   41512 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:18.301412   41512 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:18.301450   41512 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:18.301591   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:18.301760   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:18.301909   41512 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:18.302064   41512 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:18.392671   41512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:18.409840   41512 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:18.409867   41512 api_server.go:166] Checking apiserver status ...
	I0429 00:04:18.409906   41512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:18.425297   41512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:18.435983   41512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:18.436021   41512 ssh_runner.go:195] Run: ls
	I0429 00:04:18.441721   41512 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:18.448112   41512 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:18.448154   41512 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:18.448164   41512 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:18.448180   41512 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:18.448528   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.448570   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.463034   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0429 00:04:18.463405   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.463897   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.463917   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.464236   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.464429   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:18.466055   41512 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:18.466070   41512 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:18.466402   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.466442   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.480618   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0429 00:04:18.480995   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.481458   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.481476   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.481745   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.481918   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:18.484533   41512 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:18.484887   41512 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:18.484918   41512 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:18.485042   41512 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:18.485316   41512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:18.485355   41512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:18.499464   41512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I0429 00:04:18.499828   41512 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:18.500282   41512 main.go:141] libmachine: Using API Version  1
	I0429 00:04:18.500301   41512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:18.500584   41512 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:18.500786   41512 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:18.500959   41512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:18.500977   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:18.503326   41512 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:18.503742   41512 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:18.503771   41512 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:18.503882   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:18.504031   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:18.504159   41512 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:18.504285   41512 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:18.586676   41512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:18.603479   41512 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (3.74755379s)

-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 00:04:23.266767   41628 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:23.266883   41628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:23.266892   41628 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:23.266896   41628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:23.267051   41628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:23.267195   41628 out.go:298] Setting JSON to false
	I0429 00:04:23.267216   41628 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:23.267339   41628 notify.go:220] Checking for updates...
	I0429 00:04:23.267540   41628 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:23.267554   41628 status.go:255] checking status of ha-274394 ...
	I0429 00:04:23.267924   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.267973   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.284374   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0429 00:04:23.284803   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.285393   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.285418   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.285858   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.286137   41628 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:23.287915   41628 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:23.287935   41628 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:23.288379   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.288432   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.303259   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0429 00:04:23.303674   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.304111   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.304139   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.304440   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.304600   41628 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:23.307102   41628 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:23.307529   41628 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:23.307561   41628 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:23.307636   41628 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:23.307908   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.307942   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.321554   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38303
	I0429 00:04:23.321901   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.322350   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.322369   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.322645   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.322835   41628 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:23.322992   41628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:23.323024   41628 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:23.325500   41628 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:23.325893   41628 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:23.325928   41628 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:23.326084   41628 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:23.326254   41628 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:23.326415   41628 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:23.326568   41628 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:23.414647   41628 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:23.421558   41628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:23.438974   41628 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:23.439000   41628 api_server.go:166] Checking apiserver status ...
	I0429 00:04:23.439031   41628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:23.456428   41628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:23.472011   41628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:23.472048   41628 ssh_runner.go:195] Run: ls
	I0429 00:04:23.477214   41628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:23.481391   41628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:23.481410   41628 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:23.481418   41628 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:23.481443   41628 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:23.481734   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.481772   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.498647   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0429 00:04:23.499018   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.499502   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.499524   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.499829   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.499989   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:23.501504   41628 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:04:23.501515   41628 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:23.501843   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.501882   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.515850   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I0429 00:04:23.516253   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.516665   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.516699   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.516967   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.517141   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:04:23.519710   41628 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:23.520073   41628 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:23.520100   41628 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:23.520238   41628 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:23.520501   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:23.520530   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:23.534348   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0429 00:04:23.534727   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:23.535162   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:23.535182   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:23.535439   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:23.535605   41628 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:04:23.535758   41628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:23.535781   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:04:23.538414   41628 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:23.538852   41628 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:23.538887   41628 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:23.539034   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:04:23.539208   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:04:23.539345   41628 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:04:23.539478   41628 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:04:26.602238   41628 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:26.602321   41628 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:04:26.602335   41628 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:26.602366   41628 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:04:26.602388   41628 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:26.602395   41628 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:26.602683   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.602727   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.617597   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37899
	I0429 00:04:26.618035   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.618476   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.618500   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.618782   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.618969   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:26.620375   41628 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:26.620400   41628 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:26.620685   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.620718   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.634736   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I0429 00:04:26.635080   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.635534   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.635551   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.635882   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.636079   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:26.639005   41628 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:26.639473   41628 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:26.639498   41628 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:26.639693   41628 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:26.640012   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.640045   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.654481   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0429 00:04:26.654958   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.655470   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.655503   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.655831   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.655995   41628 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:26.656212   41628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:26.656234   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:26.659042   41628 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:26.659422   41628 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:26.659451   41628 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:26.659563   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:26.659729   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:26.659920   41628 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:26.660060   41628 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:26.749888   41628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:26.766244   41628 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:26.766270   41628 api_server.go:166] Checking apiserver status ...
	I0429 00:04:26.766308   41628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:26.783060   41628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:26.793526   41628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:26.793572   41628 ssh_runner.go:195] Run: ls
	I0429 00:04:26.798478   41628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:26.802985   41628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:26.803005   41628 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:26.803015   41628 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:26.803033   41628 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:26.803302   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.803352   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.817735   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0429 00:04:26.818114   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.818582   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.818602   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.818910   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.819075   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:26.820500   41628 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:26.820526   41628 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:26.820894   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.820941   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.837324   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0429 00:04:26.837714   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.838117   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.838140   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.838513   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.838686   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:26.841412   41628 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:26.841827   41628 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:26.841861   41628 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:26.841961   41628 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:26.842367   41628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:26.842406   41628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:26.856262   41628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0429 00:04:26.856629   41628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:26.857080   41628 main.go:141] libmachine: Using API Version  1
	I0429 00:04:26.857100   41628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:26.857381   41628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:26.857558   41628 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:26.857763   41628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:26.857795   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:26.860115   41628 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:26.860529   41628 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:26.860556   41628 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:26.860684   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:26.860854   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:26.861018   41628 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:26.861178   41628 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:26.942449   41628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:26.958955   41628 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (3.762395406s)

-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 00:04:31.072440   41744 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:31.072682   41744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:31.072691   41744 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:31.072694   41744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:31.072855   41744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:31.073026   41744 out.go:298] Setting JSON to false
	I0429 00:04:31.073053   41744 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:31.073100   41744 notify.go:220] Checking for updates...
	I0429 00:04:31.073522   41744 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:31.073544   41744 status.go:255] checking status of ha-274394 ...
	I0429 00:04:31.074099   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.074139   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.088821   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43057
	I0429 00:04:31.089165   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.089713   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.089739   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.090142   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.090385   41744 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:31.091828   41744 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:31.091840   41744 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:31.092098   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.092140   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.106659   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36581
	I0429 00:04:31.107034   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.107479   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.107508   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.107827   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.108064   41744 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:31.110745   41744 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:31.111175   41744 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:31.111200   41744 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:31.111360   41744 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:31.111652   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.111688   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.126592   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I0429 00:04:31.127018   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.127448   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.127469   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.127774   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.127945   41744 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:31.128168   41744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:31.128205   41744 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:31.130868   41744 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:31.131353   41744 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:31.131383   41744 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:31.131531   41744 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:31.131701   41744 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:31.131869   41744 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:31.132015   41744 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:31.218828   41744 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:31.225937   41744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:31.243013   41744 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:31.243039   41744 api_server.go:166] Checking apiserver status ...
	I0429 00:04:31.243075   41744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:31.259239   41744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:31.271194   41744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:31.271246   41744 ssh_runner.go:195] Run: ls
	I0429 00:04:31.276354   41744 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:31.282711   41744 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:31.282735   41744 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:31.282746   41744 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:31.282760   41744 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:31.283040   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.283078   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.297597   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0429 00:04:31.298090   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.298622   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.298646   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.298995   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.299179   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:31.301039   41744 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:04:31.301053   41744 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:31.301439   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.301484   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.318834   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I0429 00:04:31.319237   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.319670   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.319693   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.320132   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.320294   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:04:31.323005   41744 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:31.323462   41744 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:31.323493   41744 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:31.323616   41744 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:04:31.323905   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:31.323937   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:31.339433   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0429 00:04:31.339781   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:31.340218   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:31.340243   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:31.340514   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:31.340714   41744 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:04:31.340907   41744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:31.340927   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:04:31.343493   41744 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:31.343896   41744 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:04:31.343926   41744 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:04:31.344072   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:04:31.344200   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:04:31.344357   41744 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:04:31.344481   41744 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	W0429 00:04:34.410332   41744 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.43:22: connect: no route to host
	W0429 00:04:34.410460   41744 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E0429 00:04:34.410485   41744 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:34.410504   41744 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:04:34.410531   41744 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	I0429 00:04:34.410542   41744 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:34.411001   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.411050   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.425961   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40111
	I0429 00:04:34.426434   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.426899   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.426920   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.427299   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.427507   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:34.429198   41744 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:34.429210   41744 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:34.429510   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.429553   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.445295   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0429 00:04:34.445712   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.446221   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.446243   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.446542   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.446795   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:34.450095   41744 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:34.450482   41744 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:34.450514   41744 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:34.450670   41744 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:34.450988   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.451024   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.465751   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I0429 00:04:34.466198   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.466682   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.466708   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.467020   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.467182   41744 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:34.467350   41744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:34.467373   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:34.469861   41744 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:34.470256   41744 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:34.470295   41744 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:34.470521   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:34.470677   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:34.470833   41744 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:34.470941   41744 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:34.563612   41744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:34.580404   41744 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:34.580441   41744 api_server.go:166] Checking apiserver status ...
	I0429 00:04:34.580484   41744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:34.596030   41744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:34.607309   41744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:34.607365   41744 ssh_runner.go:195] Run: ls
	I0429 00:04:34.613038   41744 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:34.619671   41744 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:34.619696   41744 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:34.619719   41744 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:34.619733   41744 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:34.620012   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.620045   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.635243   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0429 00:04:34.635688   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.636272   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.636301   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.636636   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.636847   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:34.638377   41744 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:34.638395   41744 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:34.638695   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.638727   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.653127   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I0429 00:04:34.653627   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.654147   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.654173   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.654509   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.654697   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:34.657693   41744 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:34.658304   41744 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:34.658333   41744 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:34.658490   41744 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:34.658908   41744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:34.658951   41744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:34.674397   41744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0429 00:04:34.674855   41744 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:34.675365   41744 main.go:141] libmachine: Using API Version  1
	I0429 00:04:34.675387   41744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:34.675684   41744 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:34.675857   41744 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:34.676067   41744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:34.676085   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:34.678697   41744 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:34.679046   41744 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:34.679073   41744 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:34.679229   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:34.679394   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:34.679500   41744 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:34.679633   41744 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:34.760455   41744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:34.779755   41744 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 7 (671.821887ms)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-274394-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:04:45.832606   41880 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:45.832722   41880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:45.832744   41880 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:45.832748   41880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:45.832945   41880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:45.833107   41880 out.go:298] Setting JSON to false
	I0429 00:04:45.833132   41880 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:45.833187   41880 notify.go:220] Checking for updates...
	I0429 00:04:45.833513   41880 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:45.833527   41880 status.go:255] checking status of ha-274394 ...
	I0429 00:04:45.833916   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:45.834001   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:45.852798   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I0429 00:04:45.853206   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:45.853746   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:45.853770   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:45.854170   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:45.854363   41880 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:04:45.856060   41880 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:04:45.856074   41880 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:45.856350   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:45.856415   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:45.871815   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0429 00:04:45.872286   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:45.872777   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:45.872798   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:45.873121   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:45.873298   41880 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:04:45.876276   41880 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:45.876690   41880 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:45.876725   41880 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:45.876852   41880 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:04:45.877266   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:45.877314   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:45.893369   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0429 00:04:45.893832   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:45.894375   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:45.894400   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:45.894767   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:45.894963   41880 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:04:45.895165   41880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:45.895204   41880 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:04:45.898157   41880 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:45.898577   41880 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:04:45.898598   41880 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:04:45.898781   41880 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:04:45.898971   41880 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:04:45.899109   41880 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:04:45.899288   41880 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:04:45.982892   41880 ssh_runner.go:195] Run: systemctl --version
	I0429 00:04:45.991169   41880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:46.011848   41880 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:46.011876   41880 api_server.go:166] Checking apiserver status ...
	I0429 00:04:46.011908   41880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:46.030480   41880 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W0429 00:04:46.044685   41880 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:46.044736   41880 ssh_runner.go:195] Run: ls
	I0429 00:04:46.050414   41880 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:46.058175   41880 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:46.058197   41880 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:04:46.058207   41880 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:46.058225   41880 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:04:46.058534   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.058573   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.073139   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0429 00:04:46.073549   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.074078   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.074100   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.074405   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.074586   41880 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:04:46.076046   41880 status.go:330] ha-274394-m02 host status = "Stopped" (err=<nil>)
	I0429 00:04:46.076060   41880 status.go:343] host is not running, skipping remaining checks
	I0429 00:04:46.076068   41880 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:46.076086   41880 status.go:255] checking status of ha-274394-m03 ...
	I0429 00:04:46.076344   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.076386   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.092197   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0429 00:04:46.092616   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.093178   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.093203   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.093508   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.093705   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:46.095308   41880 status.go:330] ha-274394-m03 host status = "Running" (err=<nil>)
	I0429 00:04:46.095326   41880 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:46.095652   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.095688   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.110267   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0429 00:04:46.110621   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.111088   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.111113   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.111402   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.111576   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0429 00:04:46.114431   41880 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:46.114883   41880 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:46.114910   41880 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:46.115042   41880 host.go:66] Checking if "ha-274394-m03" exists ...
	I0429 00:04:46.115333   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.115377   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.130101   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0429 00:04:46.130515   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.131013   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.131037   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.131334   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.131546   41880 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:46.131722   41880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:46.131745   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:46.134579   41880 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:46.134974   41880 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:46.134996   41880 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:46.135141   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:46.135330   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:46.135532   41880 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:46.135677   41880 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:46.226640   41880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:46.245410   41880 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:04:46.245440   41880 api_server.go:166] Checking apiserver status ...
	I0429 00:04:46.245474   41880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:04:46.261345   41880 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0429 00:04:46.272659   41880 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:04:46.272708   41880 ssh_runner.go:195] Run: ls
	I0429 00:04:46.277680   41880 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:04:46.286167   41880 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:04:46.286189   41880 status.go:422] ha-274394-m03 apiserver status = Running (err=<nil>)
	I0429 00:04:46.286197   41880 status.go:257] ha-274394-m03 status: &{Name:ha-274394-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:04:46.286211   41880 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:04:46.286491   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.286525   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.302991   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0429 00:04:46.303399   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.303852   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.303872   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.304154   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.304298   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:46.305902   41880 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:04:46.305920   41880 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:46.306215   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.306253   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.321589   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0429 00:04:46.322065   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.322511   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.322534   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.322873   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.323070   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:04:46.325827   41880 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:46.326291   41880 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:46.326330   41880 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:46.326515   41880 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:04:46.326861   41880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:46.326918   41880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:46.342606   41880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0429 00:04:46.343011   41880 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:46.343500   41880 main.go:141] libmachine: Using API Version  1
	I0429 00:04:46.343522   41880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:46.343803   41880 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:46.343980   41880 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:46.344188   41880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:04:46.344206   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:46.347106   41880 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:46.347548   41880 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:46.347572   41880 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:46.347721   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:46.347898   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:46.348053   41880 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:46.348202   41880 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:46.426591   41880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:04:46.441771   41880 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-274394 -n ha-274394
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-274394 logs -n 25: (1.607671484s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m03_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m04 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp testdata/cp-test.txt                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m04_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03:/home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m03 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-274394 node stop m02 -v=7                                                     | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-274394 node start m02 -v=7                                                    | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 23:56:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 23:56:44.603247   36356 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:56:44.603339   36356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:56:44.603350   36356 out.go:304] Setting ErrFile to fd 2...
	I0428 23:56:44.603354   36356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:56:44.603524   36356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:56:44.604037   36356 out.go:298] Setting JSON to false
	I0428 23:56:44.604835   36356 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5949,"bootTime":1714342656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:56:44.604886   36356 start.go:139] virtualization: kvm guest
	I0428 23:56:44.607006   36356 out.go:177] * [ha-274394] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:56:44.608416   36356 notify.go:220] Checking for updates...
	I0428 23:56:44.609889   36356 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 23:56:44.611307   36356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:56:44.612625   36356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:56:44.613862   36356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:44.615062   36356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0428 23:56:44.616343   36356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 23:56:44.617967   36356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:56:44.652686   36356 out.go:177] * Using the kvm2 driver based on user configuration
	I0428 23:56:44.653931   36356 start.go:297] selected driver: kvm2
	I0428 23:56:44.653943   36356 start.go:901] validating driver "kvm2" against <nil>
	I0428 23:56:44.653953   36356 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 23:56:44.654662   36356 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:56:44.654727   36356 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0428 23:56:44.669647   36356 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0428 23:56:44.669711   36356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 23:56:44.669935   36356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0428 23:56:44.669992   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:56:44.670004   36356 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0428 23:56:44.670008   36356 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0428 23:56:44.670095   36356 start.go:340] cluster config:
	{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:56:44.670188   36356 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:56:44.672641   36356 out.go:177] * Starting "ha-274394" primary control-plane node in "ha-274394" cluster
	I0428 23:56:44.673990   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:56:44.674079   36356 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:56:44.674091   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:56:44.674167   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:56:44.674177   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:56:44.674499   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:56:44.674522   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json: {Name:mka29a6cba1291c4c68f145dccef6ba110940a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:56:44.674652   36356 start.go:360] acquireMachinesLock for ha-274394: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:56:44.674679   36356 start.go:364] duration metric: took 14.805µs to acquireMachinesLock for "ha-274394"
	I0428 23:56:44.674692   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:56:44.674751   36356 start.go:125] createHost starting for "" (driver="kvm2")
	I0428 23:56:44.676337   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:56:44.676466   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:56:44.676503   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:56:44.690945   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I0428 23:56:44.691290   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:56:44.691875   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:56:44.691902   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:56:44.692184   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:56:44.692373   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:56:44.692481   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:56:44.692639   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:56:44.692672   36356 client.go:168] LocalClient.Create starting
	I0428 23:56:44.692707   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:56:44.692765   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:56:44.692791   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:56:44.692853   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:56:44.692882   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:56:44.692901   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:56:44.692925   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:56:44.692937   36356 main.go:141] libmachine: (ha-274394) Calling .PreCreateCheck
	I0428 23:56:44.693213   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:56:44.693560   36356 main.go:141] libmachine: Creating machine...
	I0428 23:56:44.693574   36356 main.go:141] libmachine: (ha-274394) Calling .Create
	I0428 23:56:44.693695   36356 main.go:141] libmachine: (ha-274394) Creating KVM machine...
	I0428 23:56:44.694900   36356 main.go:141] libmachine: (ha-274394) DBG | found existing default KVM network
	I0428 23:56:44.695582   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.695473   36379 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0428 23:56:44.695617   36356 main.go:141] libmachine: (ha-274394) DBG | created network xml: 
	I0428 23:56:44.695637   36356 main.go:141] libmachine: (ha-274394) DBG | <network>
	I0428 23:56:44.695651   36356 main.go:141] libmachine: (ha-274394) DBG |   <name>mk-ha-274394</name>
	I0428 23:56:44.695670   36356 main.go:141] libmachine: (ha-274394) DBG |   <dns enable='no'/>
	I0428 23:56:44.695685   36356 main.go:141] libmachine: (ha-274394) DBG |   
	I0428 23:56:44.695695   36356 main.go:141] libmachine: (ha-274394) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0428 23:56:44.695714   36356 main.go:141] libmachine: (ha-274394) DBG |     <dhcp>
	I0428 23:56:44.695730   36356 main.go:141] libmachine: (ha-274394) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0428 23:56:44.695740   36356 main.go:141] libmachine: (ha-274394) DBG |     </dhcp>
	I0428 23:56:44.695758   36356 main.go:141] libmachine: (ha-274394) DBG |   </ip>
	I0428 23:56:44.695763   36356 main.go:141] libmachine: (ha-274394) DBG |   
	I0428 23:56:44.695767   36356 main.go:141] libmachine: (ha-274394) DBG | </network>
	I0428 23:56:44.695774   36356 main.go:141] libmachine: (ha-274394) DBG | 
	I0428 23:56:44.700784   36356 main.go:141] libmachine: (ha-274394) DBG | trying to create private KVM network mk-ha-274394 192.168.39.0/24...
	I0428 23:56:44.765647   36356 main.go:141] libmachine: (ha-274394) DBG | private KVM network mk-ha-274394 192.168.39.0/24 created
	I0428 23:56:44.765680   36356 main.go:141] libmachine: (ha-274394) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 ...
	I0428 23:56:44.765707   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.765600   36379 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:44.765726   36356 main.go:141] libmachine: (ha-274394) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:56:44.765810   36356 main.go:141] libmachine: (ha-274394) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:56:44.991025   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:44.990901   36379 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa...
	I0428 23:56:45.061669   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:45.061561   36379 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/ha-274394.rawdisk...
	I0428 23:56:45.061712   36356 main.go:141] libmachine: (ha-274394) DBG | Writing magic tar header
	I0428 23:56:45.061726   36356 main.go:141] libmachine: (ha-274394) DBG | Writing SSH key tar header
	I0428 23:56:45.061742   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:45.061686   36379 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 ...
	I0428 23:56:45.061871   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394
	I0428 23:56:45.061933   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394 (perms=drwx------)
	I0428 23:56:45.061961   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:56:45.061982   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:56:45.061998   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:56:45.062011   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:56:45.062043   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:56:45.062056   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:56:45.062068   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:56:45.062082   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:56:45.062094   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:56:45.062107   36356 main.go:141] libmachine: (ha-274394) DBG | Checking permissions on dir: /home
	I0428 23:56:45.062116   36356 main.go:141] libmachine: (ha-274394) DBG | Skipping /home - not owner
	I0428 23:56:45.062127   36356 main.go:141] libmachine: (ha-274394) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:56:45.062136   36356 main.go:141] libmachine: (ha-274394) Creating domain...
	I0428 23:56:45.062965   36356 main.go:141] libmachine: (ha-274394) define libvirt domain using xml: 
	I0428 23:56:45.062990   36356 main.go:141] libmachine: (ha-274394) <domain type='kvm'>
	I0428 23:56:45.063000   36356 main.go:141] libmachine: (ha-274394)   <name>ha-274394</name>
	I0428 23:56:45.063009   36356 main.go:141] libmachine: (ha-274394)   <memory unit='MiB'>2200</memory>
	I0428 23:56:45.063020   36356 main.go:141] libmachine: (ha-274394)   <vcpu>2</vcpu>
	I0428 23:56:45.063029   36356 main.go:141] libmachine: (ha-274394)   <features>
	I0428 23:56:45.063041   36356 main.go:141] libmachine: (ha-274394)     <acpi/>
	I0428 23:56:45.063045   36356 main.go:141] libmachine: (ha-274394)     <apic/>
	I0428 23:56:45.063053   36356 main.go:141] libmachine: (ha-274394)     <pae/>
	I0428 23:56:45.063058   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063066   36356 main.go:141] libmachine: (ha-274394)   </features>
	I0428 23:56:45.063071   36356 main.go:141] libmachine: (ha-274394)   <cpu mode='host-passthrough'>
	I0428 23:56:45.063078   36356 main.go:141] libmachine: (ha-274394)   
	I0428 23:56:45.063085   36356 main.go:141] libmachine: (ha-274394)   </cpu>
	I0428 23:56:45.063111   36356 main.go:141] libmachine: (ha-274394)   <os>
	I0428 23:56:45.063132   36356 main.go:141] libmachine: (ha-274394)     <type>hvm</type>
	I0428 23:56:45.063145   36356 main.go:141] libmachine: (ha-274394)     <boot dev='cdrom'/>
	I0428 23:56:45.063156   36356 main.go:141] libmachine: (ha-274394)     <boot dev='hd'/>
	I0428 23:56:45.063168   36356 main.go:141] libmachine: (ha-274394)     <bootmenu enable='no'/>
	I0428 23:56:45.063177   36356 main.go:141] libmachine: (ha-274394)   </os>
	I0428 23:56:45.063188   36356 main.go:141] libmachine: (ha-274394)   <devices>
	I0428 23:56:45.063199   36356 main.go:141] libmachine: (ha-274394)     <disk type='file' device='cdrom'>
	I0428 23:56:45.063232   36356 main.go:141] libmachine: (ha-274394)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/boot2docker.iso'/>
	I0428 23:56:45.063257   36356 main.go:141] libmachine: (ha-274394)       <target dev='hdc' bus='scsi'/>
	I0428 23:56:45.063269   36356 main.go:141] libmachine: (ha-274394)       <readonly/>
	I0428 23:56:45.063287   36356 main.go:141] libmachine: (ha-274394)     </disk>
	I0428 23:56:45.063297   36356 main.go:141] libmachine: (ha-274394)     <disk type='file' device='disk'>
	I0428 23:56:45.063303   36356 main.go:141] libmachine: (ha-274394)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:56:45.063311   36356 main.go:141] libmachine: (ha-274394)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/ha-274394.rawdisk'/>
	I0428 23:56:45.063319   36356 main.go:141] libmachine: (ha-274394)       <target dev='hda' bus='virtio'/>
	I0428 23:56:45.063323   36356 main.go:141] libmachine: (ha-274394)     </disk>
	I0428 23:56:45.063332   36356 main.go:141] libmachine: (ha-274394)     <interface type='network'>
	I0428 23:56:45.063338   36356 main.go:141] libmachine: (ha-274394)       <source network='mk-ha-274394'/>
	I0428 23:56:45.063344   36356 main.go:141] libmachine: (ha-274394)       <model type='virtio'/>
	I0428 23:56:45.063349   36356 main.go:141] libmachine: (ha-274394)     </interface>
	I0428 23:56:45.063359   36356 main.go:141] libmachine: (ha-274394)     <interface type='network'>
	I0428 23:56:45.063379   36356 main.go:141] libmachine: (ha-274394)       <source network='default'/>
	I0428 23:56:45.063391   36356 main.go:141] libmachine: (ha-274394)       <model type='virtio'/>
	I0428 23:56:45.063403   36356 main.go:141] libmachine: (ha-274394)     </interface>
	I0428 23:56:45.063417   36356 main.go:141] libmachine: (ha-274394)     <serial type='pty'>
	I0428 23:56:45.063428   36356 main.go:141] libmachine: (ha-274394)       <target port='0'/>
	I0428 23:56:45.063437   36356 main.go:141] libmachine: (ha-274394)     </serial>
	I0428 23:56:45.063445   36356 main.go:141] libmachine: (ha-274394)     <console type='pty'>
	I0428 23:56:45.063455   36356 main.go:141] libmachine: (ha-274394)       <target type='serial' port='0'/>
	I0428 23:56:45.063474   36356 main.go:141] libmachine: (ha-274394)     </console>
	I0428 23:56:45.063484   36356 main.go:141] libmachine: (ha-274394)     <rng model='virtio'>
	I0428 23:56:45.063512   36356 main.go:141] libmachine: (ha-274394)       <backend model='random'>/dev/random</backend>
	I0428 23:56:45.063538   36356 main.go:141] libmachine: (ha-274394)     </rng>
	I0428 23:56:45.063550   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063572   36356 main.go:141] libmachine: (ha-274394)     
	I0428 23:56:45.063584   36356 main.go:141] libmachine: (ha-274394)   </devices>
	I0428 23:56:45.063593   36356 main.go:141] libmachine: (ha-274394) </domain>
	I0428 23:56:45.063601   36356 main.go:141] libmachine: (ha-274394) 
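The XML block logged above is the libvirt domain definition the kvm2 driver hands to libvirt before creating the VM. As a rough, self-contained sketch (not the driver's actual code), defining and starting such a domain through the libvirt.org/go/libvirt bindings could look like the following; the connection URI and the minimal xml value are placeholders for illustration.

package main

import (
	"log"

	"libvirt.org/go/libvirt" // libvirt Go bindings (assumed available; needs cgo and libvirt headers)
)

func main() {
	// Connect to the local system libvirtd; qemu:///system is the URI shown
	// later in the log under KVMQemuURI, used here as a placeholder.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// A minimal stand-in for the full domain definition logged above.
	xml := `<domain type='kvm'>
  <name>example</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
</domain>`

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(xml)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}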
	I0428 23:56:45.067836   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a6:1d:f8 in network default
	I0428 23:56:45.068304   36356 main.go:141] libmachine: (ha-274394) Ensuring networks are active...
	I0428 23:56:45.068323   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:45.068905   36356 main.go:141] libmachine: (ha-274394) Ensuring network default is active
	I0428 23:56:45.069173   36356 main.go:141] libmachine: (ha-274394) Ensuring network mk-ha-274394 is active
	I0428 23:56:45.069648   36356 main.go:141] libmachine: (ha-274394) Getting domain xml...
	I0428 23:56:45.070358   36356 main.go:141] libmachine: (ha-274394) Creating domain...
	I0428 23:56:46.229124   36356 main.go:141] libmachine: (ha-274394) Waiting to get IP...
	I0428 23:56:46.229873   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.230293   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.230321   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.230266   36379 retry.go:31] will retry after 256.079887ms: waiting for machine to come up
	I0428 23:56:46.487746   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.488167   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.488190   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.488135   36379 retry.go:31] will retry after 259.573037ms: waiting for machine to come up
	I0428 23:56:46.749564   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:46.749940   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:46.749971   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:46.749894   36379 retry.go:31] will retry after 421.248911ms: waiting for machine to come up
	I0428 23:56:47.172578   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:47.173101   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:47.173132   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:47.173077   36379 retry.go:31] will retry after 446.554138ms: waiting for machine to come up
	I0428 23:56:47.621636   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:47.622039   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:47.622068   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:47.621985   36379 retry.go:31] will retry after 623.05137ms: waiting for machine to come up
	I0428 23:56:48.246898   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:48.247325   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:48.247347   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:48.247304   36379 retry.go:31] will retry after 674.412309ms: waiting for machine to come up
	I0428 23:56:48.922759   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:48.923073   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:48.923103   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:48.923031   36379 retry.go:31] will retry after 750.488538ms: waiting for machine to come up
	I0428 23:56:49.675196   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:49.675579   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:49.675614   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:49.675525   36379 retry.go:31] will retry after 1.274430052s: waiting for machine to come up
	I0428 23:56:50.951373   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:50.951753   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:50.951780   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:50.951712   36379 retry.go:31] will retry after 1.440496033s: waiting for machine to come up
	I0428 23:56:52.393417   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:52.393792   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:52.393814   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:52.393746   36379 retry.go:31] will retry after 2.10240003s: waiting for machine to come up
	I0428 23:56:54.497430   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:54.497829   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:54.497858   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:54.497777   36379 retry.go:31] will retry after 1.935763747s: waiting for machine to come up
	I0428 23:56:56.434877   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:56.435313   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:56.435343   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:56.435254   36379 retry.go:31] will retry after 2.246149526s: waiting for machine to come up
	I0428 23:56:58.684702   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:56:58.685119   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:56:58.685143   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:56:58.685091   36379 retry.go:31] will retry after 2.753267841s: waiting for machine to come up
	I0428 23:57:01.439496   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:01.439726   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find current IP address of domain ha-274394 in network mk-ha-274394
	I0428 23:57:01.439748   36356 main.go:141] libmachine: (ha-274394) DBG | I0428 23:57:01.439695   36379 retry.go:31] will retry after 4.35224695s: waiting for machine to come up
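The retry.go lines above show the driver polling the DHCP lease state with a growing delay until the new MAC address shows up with an IP. A minimal Go sketch of that wait-with-backoff pattern, assuming a hypothetical lookupIP helper in place of the real lease query, is:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the driver's DHCP-lease query; it is a
// hypothetical helper for this sketch, not minikube code.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until lookupIP succeeds or the deadline passes, growing
// the delay between attempts roughly as the log above does.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Add a little jitter and back off, approximately doubling each round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:a1:02:06", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}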
	I0428 23:57:05.794060   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.794442   36356 main.go:141] libmachine: (ha-274394) Found IP for machine: 192.168.39.237
	I0428 23:57:05.794467   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has current primary IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.794476   36356 main.go:141] libmachine: (ha-274394) Reserving static IP address...
	I0428 23:57:05.794825   36356 main.go:141] libmachine: (ha-274394) DBG | unable to find host DHCP lease matching {name: "ha-274394", mac: "52:54:00:a1:02:06", ip: "192.168.39.237"} in network mk-ha-274394
	I0428 23:57:05.865762   36356 main.go:141] libmachine: (ha-274394) DBG | Getting to WaitForSSH function...
	I0428 23:57:05.865787   36356 main.go:141] libmachine: (ha-274394) Reserved static IP address: 192.168.39.237
	I0428 23:57:05.865796   36356 main.go:141] libmachine: (ha-274394) Waiting for SSH to be available...
	I0428 23:57:05.868238   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.868679   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:05.868710   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:05.868753   36356 main.go:141] libmachine: (ha-274394) DBG | Using SSH client type: external
	I0428 23:57:05.868785   36356 main.go:141] libmachine: (ha-274394) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa (-rw-------)
	I0428 23:57:05.868821   36356 main.go:141] libmachine: (ha-274394) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:57:05.868837   36356 main.go:141] libmachine: (ha-274394) DBG | About to run SSH command:
	I0428 23:57:05.868862   36356 main.go:141] libmachine: (ha-274394) DBG | exit 0
	I0428 23:57:05.995181   36356 main.go:141] libmachine: (ha-274394) DBG | SSH cmd err, output: <nil>: 
	I0428 23:57:05.995442   36356 main.go:141] libmachine: (ha-274394) KVM machine creation complete!
	I0428 23:57:05.995758   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:57:05.996317   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:05.996503   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:05.996642   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:57:05.996686   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:05.998130   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:57:05.998147   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:57:05.998155   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:57:05.998161   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.000506   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.000786   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.000807   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.001000   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.001156   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.001304   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.001400   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.001519   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.001696   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.001705   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:57:06.105745   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
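The WaitForSSH step above treats a successful `exit 0` over SSH as proof that the guest's SSH daemon is up and accepts the machine's key. A rough sketch of the same probe using golang.org/x/crypto/ssh follows; the key path and address are placeholders, and this is not the sshutil code minikube actually uses.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path and address; the log above uses the profile's
	// machines/ha-274394/id_rsa and 192.168.39.237:22.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}

	client, err := ssh.Dial("tcp", "192.168.39.237:22", cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A successful "exit 0" means SSH is reachable and the key is accepted.
	if err := session.Run("exit 0"); err != nil {
		log.Fatalf("exit 0 failed: %v", err)
	}
	log.Println("SSH is available")
}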
	I0428 23:57:06.105775   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:57:06.105785   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.108180   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.108463   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.108483   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.108623   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.108837   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.108991   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.109095   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.109257   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.109433   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.109445   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:57:06.219616   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:57:06.219678   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:57:06.219688   36356 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:57:06.219711   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.219970   36356 buildroot.go:166] provisioning hostname "ha-274394"
	I0428 23:57:06.219993   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.220152   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.222516   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.222928   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.222956   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.223088   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.223273   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.223400   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.223530   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.223732   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.223884   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.223895   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394 && echo "ha-274394" | sudo tee /etc/hostname
	I0428 23:57:06.346120   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0428 23:57:06.346147   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.348916   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.349259   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.349290   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.349437   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.349605   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.349770   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.349918   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.350107   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.350259   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.350280   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:57:06.464092   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:57:06.464118   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:57:06.464151   36356 buildroot.go:174] setting up certificates
	I0428 23:57:06.464163   36356 provision.go:84] configureAuth start
	I0428 23:57:06.464185   36356 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0428 23:57:06.464469   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:06.467030   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.467355   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.467387   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.467540   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.470563   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.470888   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.470907   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.471082   36356 provision.go:143] copyHostCerts
	I0428 23:57:06.471126   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:57:06.471183   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:57:06.471207   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:57:06.471291   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:57:06.471386   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:57:06.471410   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:57:06.471420   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:57:06.471456   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:57:06.471517   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:57:06.471540   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:57:06.471549   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:57:06.471584   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:57:06.471645   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394 san=[127.0.0.1 192.168.39.237 ha-274394 localhost minikube]
	I0428 23:57:06.573643   36356 provision.go:177] copyRemoteCerts
	I0428 23:57:06.573696   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:57:06.573720   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.576152   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.576514   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.576544   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.576665   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.576843   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.577001   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.577123   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:06.663863   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:57:06.663955   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:57:06.694572   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:57:06.694632   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0428 23:57:06.722982   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:57:06.723037   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0428 23:57:06.751340   36356 provision.go:87] duration metric: took 287.163137ms to configureAuth
	I0428 23:57:06.751365   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:57:06.751508   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:06.751564   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:06.753881   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.754233   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:06.754262   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:06.754433   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:06.754591   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.754749   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:06.754852   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:06.754990   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:06.755149   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:06.755166   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:57:07.035413   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
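The %!s(MISSING) tokens here (and in the later date +%!s(MISSING).%!N(MISSING) command) are not part of the shell commands that actually ran: they are Go's fmt package flagging a format verb with no matching argument, produced when the captured command line, which presumably contained a literal %s or %N, is echoed back through a Printf-style logger. A one-line demonstration of the effect:

package main

import "fmt"

func main() {
	// Re-printing text that contains a bare %s verb through Printf, with no
	// argument to fill it, yields the %!s(MISSING) marker seen in the log.
	// (go vet would flag this call, which is exactly the point.)
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
}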
	I0428 23:57:07.035450   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:57:07.035475   36356 main.go:141] libmachine: (ha-274394) Calling .GetURL
	I0428 23:57:07.036800   36356 main.go:141] libmachine: (ha-274394) DBG | Using libvirt version 6000000
	I0428 23:57:07.038840   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.039121   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.039148   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.039301   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:57:07.039313   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:57:07.039321   36356 client.go:171] duration metric: took 22.346638475s to LocalClient.Create
	I0428 23:57:07.039346   36356 start.go:167] duration metric: took 22.346709049s to libmachine.API.Create "ha-274394"
	I0428 23:57:07.039358   36356 start.go:293] postStartSetup for "ha-274394" (driver="kvm2")
	I0428 23:57:07.039372   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:57:07.039392   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.039621   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:57:07.039654   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.041418   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.041695   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.041721   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.041838   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.042035   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.042193   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.042358   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.126553   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:57:07.131394   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:57:07.131418   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:57:07.131489   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:57:07.131582   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:57:07.131595   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:57:07.131731   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:57:07.143264   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:57:07.169021   36356 start.go:296] duration metric: took 129.64708ms for postStartSetup
	I0428 23:57:07.169063   36356 main.go:141] libmachine: (ha-274394) Calling .GetConfigRaw
	I0428 23:57:07.169591   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:07.172044   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.172385   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.172414   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.172640   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:07.172823   36356 start.go:128] duration metric: took 22.49806301s to createHost
	I0428 23:57:07.172850   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.174730   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.175018   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.175039   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.175144   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.175354   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.175490   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.175622   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.175762   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:57:07.175912   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0428 23:57:07.175931   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:57:07.283490   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348627.250495573
	
	I0428 23:57:07.283513   36356 fix.go:216] guest clock: 1714348627.250495573
	I0428 23:57:07.283522   36356 fix.go:229] Guest: 2024-04-28 23:57:07.250495573 +0000 UTC Remote: 2024-04-28 23:57:07.172835932 +0000 UTC m=+22.618724383 (delta=77.659641ms)
	I0428 23:57:07.283564   36356 fix.go:200] guest clock delta is within tolerance: 77.659641ms
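The fix.go lines above compare the guest's clock, read over SSH with date, against the host's clock and accept the machine when the difference is small. A tiny sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock for the machine to be considered healthy.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(77 * time.Millisecond) // a delta like the ~77ms logged above
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}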
	I0428 23:57:07.283580   36356 start.go:83] releasing machines lock for "ha-274394", held for 22.608884768s
	I0428 23:57:07.283601   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.283905   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:07.286602   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.286951   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.286996   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.287152   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287670   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287826   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:07.287894   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:57:07.287938   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.288038   36356 ssh_runner.go:195] Run: cat /version.json
	I0428 23:57:07.288059   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:07.290428   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290560   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290791   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.290816   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.290941   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.290945   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:07.291032   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:07.291107   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:07.291130   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.291298   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:07.291309   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.291469   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.291526   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:07.291680   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:07.367832   36356 ssh_runner.go:195] Run: systemctl --version
	I0428 23:57:07.389909   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:57:07.553715   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:57:07.560656   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:57:07.560728   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:57:07.580243   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:57:07.580269   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:57:07.580352   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:57:07.597855   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:57:07.614447   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:57:07.614507   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:57:07.630137   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:57:07.646659   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:57:07.770067   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:57:07.939717   36356 docker.go:233] disabling docker service ...
	I0428 23:57:07.939790   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:57:07.956532   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:57:07.970431   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:57:08.090398   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:57:08.208133   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:57:08.222537   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:57:08.243672   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:57:08.243746   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.254760   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:57:08.254827   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.265802   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.276580   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.287357   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:57:08.298925   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.310180   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:57:08.329330   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
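The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, sets conmon_cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. As a simplified local illustration only (minikube applies these edits remotely through ssh_runner, not like this), a `key = value` option in such a drop-in could be rewritten from Go as:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
// mirroring the effect of the sed edits in the log above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Example: force the cgroupfs cgroup manager, as done above.
	if err := setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}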
	I0428 23:57:08.340865   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:57:08.350992   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:57:08.351099   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:57:08.365295   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:57:08.375416   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:57:08.489130   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0428 23:57:08.627175   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:57:08.627252   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:57:08.632940   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:57:08.633035   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:57:08.637726   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:57:08.688300   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:57:08.688414   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:57:08.718921   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:57:08.752722   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:57:08.754156   36356 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0428 23:57:08.756290   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:08.756627   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:08.756654   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:08.756870   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:57:08.761473   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:57:08.776539   36356 kubeadm.go:877] updating cluster {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0428 23:57:08.776707   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:57:08.776777   36356 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:57:08.813688   36356 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0428 23:57:08.813765   36356 ssh_runner.go:195] Run: which lz4
	I0428 23:57:08.818304   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0428 23:57:08.818436   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0428 23:57:08.823568   36356 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0428 23:57:08.823596   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0428 23:57:10.542324   36356 crio.go:462] duration metric: took 1.723924834s to copy over tarball
	I0428 23:57:10.542410   36356 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0428 23:57:13.112112   36356 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.569667818s)
	I0428 23:57:13.112142   36356 crio.go:469] duration metric: took 2.569786929s to extract the tarball
	I0428 23:57:13.112149   36356 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0428 23:57:13.152213   36356 ssh_runner.go:195] Run: sudo crictl images --output json
	I0428 23:57:13.202989   36356 crio.go:514] all images are preloaded for cri-o runtime.
	I0428 23:57:13.203014   36356 cache_images.go:84] Images are preloaded, skipping loading
	I0428 23:57:13.203023   36356 kubeadm.go:928] updating node { 192.168.39.237 8443 v1.30.0 crio true true} ...
	I0428 23:57:13.203155   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:57:13.203239   36356 ssh_runner.go:195] Run: crio config
	I0428 23:57:13.256369   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:57:13.256390   36356 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 23:57:13.256398   36356 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0428 23:57:13.256417   36356 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-274394 NodeName:ha-274394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0428 23:57:13.256553   36356 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-274394"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
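The block above is the kubeadm configuration minikube renders for the primary control-plane node; later in this log it is copied into the guest as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before init runs. As a minimal sketch (assuming the profile name and paths shown in this log, and that the bundled kubeadm is recent enough to have the "config validate" subcommand), the rendered file can be inspected and checked from the host:

	# Inspect the rendered kubeadm config inside the VM and ask kubeadm to validate it.
	minikube ssh -p ha-274394 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	minikube ssh -p ha-274394 -- sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml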
	
	I0428 23:57:13.256576   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:57:13.256610   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:57:13.276673   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:57:13.276754   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
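This static-pod manifest is what provides the API-server HA VIP (192.168.39.254): with cp_enable set, the kube-vip instances run leader election on the plndr-cp-lock Lease in kube-system, and the current leader answers ARP for the VIP on eth0. A hedged way to see where the VIP currently lives once the cluster is up (names taken from the manifest above):

	# The VIP appears as a secondary IPv4 address on eth0 of the leader only.
	minikube ssh -p ha-274394 -- ip -4 addr show eth0
	# holderIdentity in this Lease names the control-plane node that owns the VIP.
	kubectl -n kube-system get lease plndr-cp-lock -o yaml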
	I0428 23:57:13.276809   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:57:13.289710   36356 binaries.go:44] Found k8s binaries, skipping transfer
	I0428 23:57:13.289767   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0428 23:57:13.302563   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0428 23:57:13.323639   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:57:13.343843   36356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0428 23:57:13.363730   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0428 23:57:13.384050   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:57:13.388734   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
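The one-liner above keeps /etc/hosts pointed at the HA VIP. Written out for readability, it amounts to the following (same effect, just expanded):

	# Drop any stale control-plane.minikube.internal entry, then append the VIP.
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	printf '192.168.39.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts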
	I0428 23:57:13.404951   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:57:13.547364   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:57:13.573283   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.237
	I0428 23:57:13.573311   36356 certs.go:194] generating shared ca certs ...
	I0428 23:57:13.573326   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.573483   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:57:13.573525   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:57:13.573535   36356 certs.go:256] generating profile certs ...
	I0428 23:57:13.573586   36356 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:57:13.573615   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt with IP's: []
	I0428 23:57:13.648288   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt ...
	I0428 23:57:13.648320   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt: {Name:mk32ae7dfd9f9a702d9db8b5322b2bf08a48e9fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.648491   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key ...
	I0428 23:57:13.648503   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key: {Name:mk3088da440752b13c33384f2e40d936a105f5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.648587   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582
	I0428 23:57:13.648604   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.254]
	I0428 23:57:13.811379   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 ...
	I0428 23:57:13.811407   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582: {Name:mkec37f4828f6d0d617a8817ad0cb65319dfc837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.811571   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582 ...
	I0428 23:57:13.811589   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582: {Name:mk3bb5ee50351cf2b6f1de8651fd8346e52caf40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:13.811694   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.c2322582 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:57:13.811784   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.c2322582 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:57:13.811836   36356 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:57:13.811856   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt with IP's: []
	I0428 23:57:14.027352   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt ...
	I0428 23:57:14.027379   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt: {Name:mkf2b62fd6e6eae93da857bc5cdce5be75eb4616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:14.027538   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key ...
	I0428 23:57:14.027550   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key: {Name:mkca8eb4eb8045104b93c56b349092a4368aa735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:14.027645   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:57:14.027664   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:57:14.027680   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:57:14.027695   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:57:14.027706   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:57:14.027720   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:57:14.027732   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:57:14.027743   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:57:14.027789   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:57:14.027820   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:57:14.027829   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:57:14.027856   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:57:14.027877   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:57:14.027897   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:57:14.027934   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:57:14.027960   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.027974   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.027986   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.028516   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:57:14.056477   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:57:14.083299   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:57:14.110784   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:57:14.138763   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0428 23:57:14.165666   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:57:14.193648   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:57:14.223287   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:57:14.253425   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:57:14.284140   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:57:14.314319   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:57:14.343119   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0428 23:57:14.364726   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:57:14.375141   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:57:14.395577   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.401302   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.401384   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:57:14.410279   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:57:14.425476   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:57:14.438080   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.442930   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.442975   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:57:14.449030   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0428 23:57:14.461177   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:57:14.473485   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.478398   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.478444   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:57:14.484622   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
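The ls/openssl/ln sequence above (repeated for minikubeCA.pem, 20727.pem and 207272.pem) installs each certificate under its OpenSSL subject-hash name in /etc/ssl/certs, which is how OpenSSL's default verify path finds trusted CAs. A minimal sketch of that convention, using the first certificate as the example:

	# The link name is "<subject-hash>.0"; for minikubeCA.pem the hash is b5213941, matching the link created above.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"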
	I0428 23:57:14.497144   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:57:14.501613   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:57:14.501664   36356 kubeadm.go:391] StartCluster: {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:57:14.501755   36356 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0428 23:57:14.501787   36356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0428 23:57:14.548339   36356 cri.go:91] found id: ""
	I0428 23:57:14.548426   36356 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0428 23:57:14.560473   36356 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0428 23:57:14.572795   36356 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0428 23:57:14.584608   36356 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0428 23:57:14.584629   36356 kubeadm.go:156] found existing configuration files:
	
	I0428 23:57:14.584677   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0428 23:57:14.595801   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0428 23:57:14.595872   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0428 23:57:14.608553   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0428 23:57:14.805450   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0428 23:57:14.805508   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0428 23:57:14.816940   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0428 23:57:14.827476   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0428 23:57:14.827533   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0428 23:57:14.838818   36356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0428 23:57:14.849316   36356 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0428 23:57:14.849374   36356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0428 23:57:14.860533   36356 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0428 23:57:14.970911   36356 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0428 23:57:14.971092   36356 kubeadm.go:309] [preflight] Running pre-flight checks
	I0428 23:57:15.140642   36356 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0428 23:57:15.140791   36356 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0428 23:57:15.140938   36356 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0428 23:57:15.402735   36356 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0428 23:57:15.581497   36356 out.go:204]   - Generating certificates and keys ...
	I0428 23:57:15.581620   36356 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0428 23:57:15.581696   36356 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0428 23:57:15.581791   36356 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0428 23:57:15.742471   36356 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0428 23:57:15.880482   36356 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0428 23:57:16.079408   36356 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0428 23:57:16.265709   36356 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0428 23:57:16.265929   36356 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-274394 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0428 23:57:16.377253   36356 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0428 23:57:16.377392   36356 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-274394 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0428 23:57:16.568167   36356 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0428 23:57:16.755727   36356 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0428 23:57:17.068472   36356 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0428 23:57:17.068836   36356 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0428 23:57:17.224359   36356 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0428 23:57:17.587671   36356 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0428 23:57:17.762573   36356 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0428 23:57:17.944221   36356 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0428 23:57:18.245238   36356 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0428 23:57:18.245785   36356 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0428 23:57:18.249440   36356 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0428 23:57:18.251518   36356 out.go:204]   - Booting up control plane ...
	I0428 23:57:18.251619   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0428 23:57:18.251797   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0428 23:57:18.252676   36356 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0428 23:57:18.269919   36356 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0428 23:57:18.270872   36356 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0428 23:57:18.270967   36356 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0428 23:57:18.409785   36356 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0428 23:57:18.409916   36356 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0428 23:57:19.408097   36356 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001619721s
	I0428 23:57:19.408195   36356 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0428 23:57:25.389394   36356 kubeadm.go:309] [api-check] The API server is healthy after 5.984864862s
	I0428 23:57:25.404982   36356 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0428 23:57:25.422266   36356 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0428 23:57:25.456777   36356 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0428 23:57:25.456979   36356 kubeadm.go:309] [mark-control-plane] Marking the node ha-274394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0428 23:57:25.476230   36356 kubeadm.go:309] [bootstrap-token] Using token: p7cwcq.w3fzbiomge83y6x5
	I0428 23:57:25.477875   36356 out.go:204]   - Configuring RBAC rules ...
	I0428 23:57:25.478055   36356 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0428 23:57:25.485371   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0428 23:57:25.497525   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0428 23:57:25.502880   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0428 23:57:25.507998   36356 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0428 23:57:25.515623   36356 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0428 23:57:25.797907   36356 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0428 23:57:26.231776   36356 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0428 23:57:26.796535   36356 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0428 23:57:26.797290   36356 kubeadm.go:309] 
	I0428 23:57:26.797374   36356 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0428 23:57:26.797385   36356 kubeadm.go:309] 
	I0428 23:57:26.797496   36356 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0428 23:57:26.797518   36356 kubeadm.go:309] 
	I0428 23:57:26.797583   36356 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0428 23:57:26.797662   36356 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0428 23:57:26.797737   36356 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0428 23:57:26.797770   36356 kubeadm.go:309] 
	I0428 23:57:26.797838   36356 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0428 23:57:26.797857   36356 kubeadm.go:309] 
	I0428 23:57:26.797939   36356 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0428 23:57:26.797950   36356 kubeadm.go:309] 
	I0428 23:57:26.798054   36356 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0428 23:57:26.798154   36356 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0428 23:57:26.798260   36356 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0428 23:57:26.798268   36356 kubeadm.go:309] 
	I0428 23:57:26.798355   36356 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0428 23:57:26.798421   36356 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0428 23:57:26.798427   36356 kubeadm.go:309] 
	I0428 23:57:26.798493   36356 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p7cwcq.w3fzbiomge83y6x5 \
	I0428 23:57:26.798582   36356 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 \
	I0428 23:57:26.798602   36356 kubeadm.go:309] 	--control-plane 
	I0428 23:57:26.798608   36356 kubeadm.go:309] 
	I0428 23:57:26.798682   36356 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0428 23:57:26.798693   36356 kubeadm.go:309] 
	I0428 23:57:26.798806   36356 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p7cwcq.w3fzbiomge83y6x5 \
	I0428 23:57:26.798954   36356 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 
	I0428 23:57:26.799378   36356 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
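The --discovery-token-ca-cert-hash printed in the join commands above is not a secret: it is the SHA-256 of the cluster CA's DER-encoded public key, which lets a joining node pin the CA it discovers over the insecure bootstrap channel. A hedged sketch for recomputing it (the cert path follows minikube's layout seen earlier in this log; this is the standard kubeadm recipe, using "openssl pkey" so it works for any key type):

	# Should reproduce sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 from above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | awk '{print "sha256:" $NF}'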
	I0428 23:57:26.799459   36356 cni.go:84] Creating CNI manager for ""
	I0428 23:57:26.799472   36356 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0428 23:57:26.801505   36356 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0428 23:57:26.802707   36356 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0428 23:57:26.808925   36356 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0428 23:57:26.808943   36356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0428 23:57:26.834378   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
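minikube picked kindnet as the CNI for this multinode profile (cni.go:136 above) and applies it from the rendered /var/tmp/minikube/cni.yaml. A hedged check that the CNI actually rolls out; the DaemonSet name and label here are assumptions based on the stock kindnet manifest, not something shown in this log:

	# Assumed object names from kindnet's upstream manifest; adjust if the profile differs.
	kubectl -n kube-system rollout status ds/kindnet --timeout=120s
	kubectl -n kube-system get pods -l app=kindnet -o wide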
	I0428 23:57:27.201663   36356 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0428 23:57:27.201739   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:27.201741   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394 minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=true
	I0428 23:57:27.218086   36356 ops.go:34] apiserver oom_adj: -16
	I0428 23:57:27.424076   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:27.925075   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:28.424356   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:28.924339   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:29.424513   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:29.924288   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:30.425076   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:30.924258   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:31.424755   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:31.924830   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:32.424739   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:32.924320   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:33.425053   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:33.924560   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:34.424815   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:34.924173   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:35.424207   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:35.924496   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:36.425172   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:36.924859   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:37.424869   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:37.924250   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:38.424267   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:38.924424   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0428 23:57:39.082874   36356 kubeadm.go:1107] duration metric: took 11.881195164s to wait for elevateKubeSystemPrivileges
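The burst of "kubectl get sa default" calls above is minikube waiting for the ServiceAccount controller to come alive (the elevateKubeSystemPrivileges step), since the minikube-rbac ClusterRoleBinding created a few lines earlier only matters once service accounts exist. Roughly what that loop amounts to, as a sketch (kubeconfig flag dropped for brevity):

	# Poll until the default ServiceAccount exists, then confirm the RBAC binding.
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
	kubectl get clusterrolebinding minikube-rbac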
	W0428 23:57:39.082918   36356 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0428 23:57:39.082928   36356 kubeadm.go:393] duration metric: took 24.581266215s to StartCluster
	I0428 23:57:39.082947   36356 settings.go:142] acquiring lock: {Name:mk4e6965347be51f4cd501030baea6b9cd2dbc9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:39.083032   36356 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:57:39.083795   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/kubeconfig: {Name:mk5412a370a0ddec304ff7697d6d137221e96742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:57:39.083984   36356 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:57:39.084009   36356 start.go:240] waiting for startup goroutines ...
	I0428 23:57:39.083994   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0428 23:57:39.084007   36356 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0428 23:57:39.084098   36356 addons.go:69] Setting storage-provisioner=true in profile "ha-274394"
	I0428 23:57:39.084105   36356 addons.go:69] Setting default-storageclass=true in profile "ha-274394"
	I0428 23:57:39.084136   36356 addons.go:234] Setting addon storage-provisioner=true in "ha-274394"
	I0428 23:57:39.084173   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:57:39.084193   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:39.084174   36356 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-274394"
	I0428 23:57:39.084599   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.084626   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.084651   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.084660   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.099465   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0428 23:57:39.099483   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0428 23:57:39.099923   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.099924   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.100433   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.100461   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.100515   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.100531   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.100847   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.100870   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.101048   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.101480   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.101623   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.103124   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:57:39.103379   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0428 23:57:39.103792   36356 cert_rotation.go:137] Starting client certificate rotation controller
	I0428 23:57:39.103956   36356 addons.go:234] Setting addon default-storageclass=true in "ha-274394"
	I0428 23:57:39.103995   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:57:39.104250   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.104294   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.117517   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0428 23:57:39.117990   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.118551   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.118583   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.118929   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.119159   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.119482   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0428 23:57:39.119880   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.120375   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.120400   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.120739   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.120952   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:39.122980   36356 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0428 23:57:39.121312   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:39.124387   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:39.124487   36356 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:57:39.124512   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0428 23:57:39.124529   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:39.127583   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.128028   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:39.128055   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.128209   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:39.128392   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:39.128545   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:39.128689   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:39.140041   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0428 23:57:39.140489   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:39.140944   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:39.140962   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:39.141257   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:39.141435   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:57:39.143020   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:57:39.143276   36356 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0428 23:57:39.143289   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0428 23:57:39.143301   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:57:39.146287   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.146694   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:57:39.146732   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:57:39.146972   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:57:39.147160   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:57:39.147331   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:57:39.147464   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:57:39.275667   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0428 23:57:39.307241   36356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0428 23:57:39.452854   36356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0428 23:57:39.886409   36356 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
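The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), which is what the "host record injected" message confirms. A quick way to verify the injected block:

	# Show the hosts{} stanza that was spliced into the Corefile.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'host.minikube.internal'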
	I0428 23:57:40.144028   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144057   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144093   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144128   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144337   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144352   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144360   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144366   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144465   36356 main.go:141] libmachine: (ha-274394) DBG | Closing plugin on server side
	I0428 23:57:40.144477   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144503   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144521   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.144529   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.144593   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144632   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144616   36356 main.go:141] libmachine: (ha-274394) DBG | Closing plugin on server side
	I0428 23:57:40.144722   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.144737   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.144861   36356 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0428 23:57:40.144875   36356 round_trippers.go:469] Request Headers:
	I0428 23:57:40.144885   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:57:40.144889   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:57:40.159830   36356 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0428 23:57:40.160397   36356 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0428 23:57:40.160413   36356 round_trippers.go:469] Request Headers:
	I0428 23:57:40.160420   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:57:40.160426   36356 round_trippers.go:473]     Content-Type: application/json
	I0428 23:57:40.160431   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:57:40.164030   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:57:40.164307   36356 main.go:141] libmachine: Making call to close driver server
	I0428 23:57:40.164329   36356 main.go:141] libmachine: (ha-274394) Calling .Close
	I0428 23:57:40.164622   36356 main.go:141] libmachine: Successfully made call to close driver server
	I0428 23:57:40.164645   36356 main.go:141] libmachine: Making call to close connection to plugin binary
	I0428 23:57:40.166553   36356 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0428 23:57:40.168054   36356 addons.go:505] duration metric: took 1.084044252s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0428 23:57:40.168092   36356 start.go:245] waiting for cluster config update ...
	I0428 23:57:40.168102   36356 start.go:254] writing updated cluster config ...
	I0428 23:57:40.170126   36356 out.go:177] 
	I0428 23:57:40.171839   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:57:40.171906   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:40.173902   36356 out.go:177] * Starting "ha-274394-m02" control-plane node in "ha-274394" cluster
	I0428 23:57:40.175173   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:57:40.175207   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:57:40.175324   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:57:40.175343   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:57:40.175453   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:57:40.175692   36356 start.go:360] acquireMachinesLock for ha-274394-m02: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:57:40.175759   36356 start.go:364] duration metric: took 36.777µs to acquireMachinesLock for "ha-274394-m02"
	I0428 23:57:40.175784   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:57:40.175893   36356 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0428 23:57:40.177596   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:57:40.177686   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:57:40.177716   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:57:40.192465   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0428 23:57:40.192869   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:57:40.193339   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:57:40.193360   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:57:40.193660   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:57:40.193851   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:57:40.194005   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:57:40.194206   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:57:40.194237   36356 client.go:168] LocalClient.Create starting
	I0428 23:57:40.194281   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:57:40.194325   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:57:40.194343   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:57:40.194395   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:57:40.194413   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:57:40.194423   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:57:40.194437   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:57:40.194445   36356 main.go:141] libmachine: (ha-274394-m02) Calling .PreCreateCheck
	I0428 23:57:40.194610   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:57:40.194952   36356 main.go:141] libmachine: Creating machine...
	I0428 23:57:40.194964   36356 main.go:141] libmachine: (ha-274394-m02) Calling .Create
	I0428 23:57:40.195082   36356 main.go:141] libmachine: (ha-274394-m02) Creating KVM machine...
	I0428 23:57:40.196448   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found existing default KVM network
	I0428 23:57:40.196567   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found existing private KVM network mk-ha-274394
	I0428 23:57:40.196728   36356 main.go:141] libmachine: (ha-274394-m02) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 ...
	I0428 23:57:40.196753   36356 main.go:141] libmachine: (ha-274394-m02) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:57:40.196818   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.196708   36756 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:57:40.196940   36356 main.go:141] libmachine: (ha-274394-m02) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:57:40.430082   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.429932   36756 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa...
	I0428 23:57:40.583373   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.583223   36756 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/ha-274394-m02.rawdisk...
	I0428 23:57:40.583418   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Writing magic tar header
	I0428 23:57:40.583434   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Writing SSH key tar header
	I0428 23:57:40.583447   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:40.583383   36756 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 ...
	I0428 23:57:40.583532   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02
	I0428 23:57:40.583555   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02 (perms=drwx------)
	I0428 23:57:40.583568   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:57:40.583584   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:57:40.583603   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:57:40.583619   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:57:40.583646   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:57:40.583660   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:57:40.583673   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:57:40.583685   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:57:40.583696   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:57:40.583708   36356 main.go:141] libmachine: (ha-274394-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:57:40.583716   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Checking permissions on dir: /home
	I0428 23:57:40.583733   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Skipping /home - not owner
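[editor's note] The "Fixing permissions" lines above walk upward from the machine directory, adding the executable bit so each parent stays traversable and stopping at directories the CI user does not own (/home). A rough Go sketch of that walk, under the assumption that a chmod failure means "not owner, stop climbing":

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureTraversable walks from dir up toward the filesystem root and adds the
// owner-executable bit where it is missing; a sketch of the permission fixups
// logged above, not minikube's actual common.go code.
func ensureTraversable(dir string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if info.Mode()&0o100 == 0 {
			if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
				// Not the owner (e.g. /home): skip and stop climbing.
				fmt.Printf("skipping %s: %v\n", dir, err)
				return nil
			}
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return nil // reached the root
		}
		dir = parent
	}
}

func main() {
	if err := ensureTraversable(os.TempDir()); err != nil {
		fmt.Println(err)
	}
}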
	I0428 23:57:40.583747   36356 main.go:141] libmachine: (ha-274394-m02) Creating domain...
	I0428 23:57:40.584643   36356 main.go:141] libmachine: (ha-274394-m02) define libvirt domain using xml: 
	I0428 23:57:40.584666   36356 main.go:141] libmachine: (ha-274394-m02) <domain type='kvm'>
	I0428 23:57:40.584679   36356 main.go:141] libmachine: (ha-274394-m02)   <name>ha-274394-m02</name>
	I0428 23:57:40.584692   36356 main.go:141] libmachine: (ha-274394-m02)   <memory unit='MiB'>2200</memory>
	I0428 23:57:40.584703   36356 main.go:141] libmachine: (ha-274394-m02)   <vcpu>2</vcpu>
	I0428 23:57:40.584718   36356 main.go:141] libmachine: (ha-274394-m02)   <features>
	I0428 23:57:40.584731   36356 main.go:141] libmachine: (ha-274394-m02)     <acpi/>
	I0428 23:57:40.584741   36356 main.go:141] libmachine: (ha-274394-m02)     <apic/>
	I0428 23:57:40.584750   36356 main.go:141] libmachine: (ha-274394-m02)     <pae/>
	I0428 23:57:40.584760   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.584777   36356 main.go:141] libmachine: (ha-274394-m02)   </features>
	I0428 23:57:40.584788   36356 main.go:141] libmachine: (ha-274394-m02)   <cpu mode='host-passthrough'>
	I0428 23:57:40.584800   36356 main.go:141] libmachine: (ha-274394-m02)   
	I0428 23:57:40.584812   36356 main.go:141] libmachine: (ha-274394-m02)   </cpu>
	I0428 23:57:40.584825   36356 main.go:141] libmachine: (ha-274394-m02)   <os>
	I0428 23:57:40.584837   36356 main.go:141] libmachine: (ha-274394-m02)     <type>hvm</type>
	I0428 23:57:40.584848   36356 main.go:141] libmachine: (ha-274394-m02)     <boot dev='cdrom'/>
	I0428 23:57:40.584858   36356 main.go:141] libmachine: (ha-274394-m02)     <boot dev='hd'/>
	I0428 23:57:40.584869   36356 main.go:141] libmachine: (ha-274394-m02)     <bootmenu enable='no'/>
	I0428 23:57:40.584881   36356 main.go:141] libmachine: (ha-274394-m02)   </os>
	I0428 23:57:40.584889   36356 main.go:141] libmachine: (ha-274394-m02)   <devices>
	I0428 23:57:40.584903   36356 main.go:141] libmachine: (ha-274394-m02)     <disk type='file' device='cdrom'>
	I0428 23:57:40.584923   36356 main.go:141] libmachine: (ha-274394-m02)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/boot2docker.iso'/>
	I0428 23:57:40.584937   36356 main.go:141] libmachine: (ha-274394-m02)       <target dev='hdc' bus='scsi'/>
	I0428 23:57:40.584947   36356 main.go:141] libmachine: (ha-274394-m02)       <readonly/>
	I0428 23:57:40.584955   36356 main.go:141] libmachine: (ha-274394-m02)     </disk>
	I0428 23:57:40.584965   36356 main.go:141] libmachine: (ha-274394-m02)     <disk type='file' device='disk'>
	I0428 23:57:40.584977   36356 main.go:141] libmachine: (ha-274394-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:57:40.584989   36356 main.go:141] libmachine: (ha-274394-m02)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/ha-274394-m02.rawdisk'/>
	I0428 23:57:40.585019   36356 main.go:141] libmachine: (ha-274394-m02)       <target dev='hda' bus='virtio'/>
	I0428 23:57:40.585048   36356 main.go:141] libmachine: (ha-274394-m02)     </disk>
	I0428 23:57:40.585062   36356 main.go:141] libmachine: (ha-274394-m02)     <interface type='network'>
	I0428 23:57:40.585078   36356 main.go:141] libmachine: (ha-274394-m02)       <source network='mk-ha-274394'/>
	I0428 23:57:40.585090   36356 main.go:141] libmachine: (ha-274394-m02)       <model type='virtio'/>
	I0428 23:57:40.585100   36356 main.go:141] libmachine: (ha-274394-m02)     </interface>
	I0428 23:57:40.585112   36356 main.go:141] libmachine: (ha-274394-m02)     <interface type='network'>
	I0428 23:57:40.585122   36356 main.go:141] libmachine: (ha-274394-m02)       <source network='default'/>
	I0428 23:57:40.585133   36356 main.go:141] libmachine: (ha-274394-m02)       <model type='virtio'/>
	I0428 23:57:40.585143   36356 main.go:141] libmachine: (ha-274394-m02)     </interface>
	I0428 23:57:40.585193   36356 main.go:141] libmachine: (ha-274394-m02)     <serial type='pty'>
	I0428 23:57:40.585213   36356 main.go:141] libmachine: (ha-274394-m02)       <target port='0'/>
	I0428 23:57:40.585230   36356 main.go:141] libmachine: (ha-274394-m02)     </serial>
	I0428 23:57:40.585246   36356 main.go:141] libmachine: (ha-274394-m02)     <console type='pty'>
	I0428 23:57:40.585275   36356 main.go:141] libmachine: (ha-274394-m02)       <target type='serial' port='0'/>
	I0428 23:57:40.585293   36356 main.go:141] libmachine: (ha-274394-m02)     </console>
	I0428 23:57:40.585302   36356 main.go:141] libmachine: (ha-274394-m02)     <rng model='virtio'>
	I0428 23:57:40.585308   36356 main.go:141] libmachine: (ha-274394-m02)       <backend model='random'>/dev/random</backend>
	I0428 23:57:40.585314   36356 main.go:141] libmachine: (ha-274394-m02)     </rng>
	I0428 23:57:40.585321   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.585326   36356 main.go:141] libmachine: (ha-274394-m02)     
	I0428 23:57:40.585333   36356 main.go:141] libmachine: (ha-274394-m02)   </devices>
	I0428 23:57:40.585338   36356 main.go:141] libmachine: (ha-274394-m02) </domain>
	I0428 23:57:40.585349   36356 main.go:141] libmachine: (ha-274394-m02) 
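[editor's note] The block above is the libvirt domain XML that libmachine defines for the new node (2 vCPUs, 2200 MiB, boot ISO plus raw disk, two virtio NICs). A rough sketch of producing a similar definition with Go's text/template follows; the struct fields and the trimmed-down XML are assumptions for illustration, not the kvm2 driver's real template.

package main

import (
	"os"
	"text/template"
)

// domainConfig carries just the values that vary between nodes in this sketch.
type domainConfig struct {
	Name        string
	MemoryMiB   int
	CPUs        int
	DiskPath    string
	ISOPath     string
	NetworkName string
}

// domainXML is a trimmed-down illustration of a kvm domain definition.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.NetworkName}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name: "ha-274394-m02", MemoryMiB: 2200, CPUs: 2,
		DiskPath:    "/path/to/ha-274394-m02.rawdisk",
		ISOPath:     "/path/to/boot2docker.iso",
		NetworkName: "mk-ha-274394",
	}
	// Render the definition to stdout; the driver would hand it to libvirt.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}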
	I0428 23:57:40.591951   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:29:fa:a1 in network default
	I0428 23:57:40.592539   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring networks are active...
	I0428 23:57:40.592572   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:40.593208   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring network default is active
	I0428 23:57:40.593513   36356 main.go:141] libmachine: (ha-274394-m02) Ensuring network mk-ha-274394 is active
	I0428 23:57:40.593859   36356 main.go:141] libmachine: (ha-274394-m02) Getting domain xml...
	I0428 23:57:40.594509   36356 main.go:141] libmachine: (ha-274394-m02) Creating domain...
	I0428 23:57:41.836657   36356 main.go:141] libmachine: (ha-274394-m02) Waiting to get IP...
	I0428 23:57:41.837571   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:41.838045   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:41.838120   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:41.838015   36756 retry.go:31] will retry after 263.733241ms: waiting for machine to come up
	I0428 23:57:42.105185   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.105687   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.105724   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.105649   36756 retry.go:31] will retry after 331.1126ms: waiting for machine to come up
	I0428 23:57:42.438029   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.438463   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.438490   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.438436   36756 retry.go:31] will retry after 446.032628ms: waiting for machine to come up
	I0428 23:57:42.886123   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:42.886522   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:42.886551   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:42.886483   36756 retry.go:31] will retry after 461.928323ms: waiting for machine to come up
	I0428 23:57:43.350246   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:43.350746   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:43.350773   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:43.350696   36756 retry.go:31] will retry after 703.683282ms: waiting for machine to come up
	I0428 23:57:44.055920   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:44.056329   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:44.056361   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:44.056286   36756 retry.go:31] will retry after 903.640049ms: waiting for machine to come up
	I0428 23:57:44.961160   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:44.961635   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:44.961664   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:44.961581   36756 retry.go:31] will retry after 931.278913ms: waiting for machine to come up
	I0428 23:57:45.894066   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:45.894506   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:45.894535   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:45.894451   36756 retry.go:31] will retry after 1.279366183s: waiting for machine to come up
	I0428 23:57:47.174982   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:47.175538   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:47.175570   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:47.175475   36756 retry.go:31] will retry after 1.506197273s: waiting for machine to come up
	I0428 23:57:48.683913   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:48.684413   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:48.684452   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:48.684371   36756 retry.go:31] will retry after 2.323617854s: waiting for machine to come up
	I0428 23:57:51.009605   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:51.010052   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:51.010079   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:51.010011   36756 retry.go:31] will retry after 2.511993371s: waiting for machine to come up
	I0428 23:57:53.524618   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:53.524963   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:53.524989   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:53.524930   36756 retry.go:31] will retry after 2.984005541s: waiting for machine to come up
	I0428 23:57:56.510802   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:57:56.511159   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:57:56.511208   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:57:56.511109   36756 retry.go:31] will retry after 3.975363933s: waiting for machine to come up
	I0428 23:58:00.488249   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:00.488659   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find current IP address of domain ha-274394-m02 in network mk-ha-274394
	I0428 23:58:00.488699   36356 main.go:141] libmachine: (ha-274394-m02) DBG | I0428 23:58:00.488635   36756 retry.go:31] will retry after 4.708905436s: waiting for machine to come up
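[editor's note] The retry.go lines above poll the libvirt network for a DHCP lease, sleeping a growing, jittered interval between attempts until the VM reports an IP. A minimal Go sketch of that polling pattern, where lookupIP is a placeholder rather than libmachine's real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the network's DHCP leases for a MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until a lease appears, waiting a growing, jittered interval
// between attempts, roughly like the "will retry after ..." lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:94:ad:64", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}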
	I0428 23:58:05.199518   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.200038   36356 main.go:141] libmachine: (ha-274394-m02) Found IP for machine: 192.168.39.43
	I0428 23:58:05.200069   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.200075   36356 main.go:141] libmachine: (ha-274394-m02) Reserving static IP address...
	I0428 23:58:05.200401   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find host DHCP lease matching {name: "ha-274394-m02", mac: "52:54:00:94:ad:64", ip: "192.168.39.43"} in network mk-ha-274394
	I0428 23:58:05.271102   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Getting to WaitForSSH function...
	I0428 23:58:05.271136   36356 main.go:141] libmachine: (ha-274394-m02) Reserved static IP address: 192.168.39.43
	I0428 23:58:05.271154   36356 main.go:141] libmachine: (ha-274394-m02) Waiting for SSH to be available...
	I0428 23:58:05.273658   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:05.274071   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394
	I0428 23:58:05.274110   36356 main.go:141] libmachine: (ha-274394-m02) DBG | unable to find defined IP address of network mk-ha-274394 interface with MAC address 52:54:00:94:ad:64
	I0428 23:58:05.274190   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH client type: external
	I0428 23:58:05.274217   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa (-rw-------)
	I0428 23:58:05.274244   36356 main.go:141] libmachine: (ha-274394-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:58:05.274262   36356 main.go:141] libmachine: (ha-274394-m02) DBG | About to run SSH command:
	I0428 23:58:05.274279   36356 main.go:141] libmachine: (ha-274394-m02) DBG | exit 0
	I0428 23:58:05.277779   36356 main.go:141] libmachine: (ha-274394-m02) DBG | SSH cmd err, output: exit status 255: 
	I0428 23:58:05.277800   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0428 23:58:05.277808   36356 main.go:141] libmachine: (ha-274394-m02) DBG | command : exit 0
	I0428 23:58:05.277813   36356 main.go:141] libmachine: (ha-274394-m02) DBG | err     : exit status 255
	I0428 23:58:05.277834   36356 main.go:141] libmachine: (ha-274394-m02) DBG | output  : 
	I0428 23:58:08.279936   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Getting to WaitForSSH function...
	I0428 23:58:08.282287   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.282606   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.282638   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.282767   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH client type: external
	I0428 23:58:08.282790   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa (-rw-------)
	I0428 23:58:08.282817   36356 main.go:141] libmachine: (ha-274394-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:58:08.282831   36356 main.go:141] libmachine: (ha-274394-m02) DBG | About to run SSH command:
	I0428 23:58:08.282847   36356 main.go:141] libmachine: (ha-274394-m02) DBG | exit 0
	I0428 23:58:08.406493   36356 main.go:141] libmachine: (ha-274394-m02) DBG | SSH cmd err, output: <nil>: 
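[editor's note] WaitForSSH above shells out to the system ssh client and treats a successful `exit 0` as proof the guest is reachable (the first attempt fails with status 255 while sshd is still starting). A small Go sketch of that reachability check using os/exec; the helper name and key path are hypothetical, and this is only an approximation of the external-client path shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// checkSSH runs `exit 0` over the system ssh client with options similar to
// those logged above; a sketch only, not libmachine's actual helper.
func checkSSH(ip, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %q)", err, out)
	}
	return nil
}

func main() {
	if err := checkSSH("192.168.39.43", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}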
	I0428 23:58:08.406751   36356 main.go:141] libmachine: (ha-274394-m02) KVM machine creation complete!
	I0428 23:58:08.407023   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:58:08.407546   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:08.407752   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:08.407917   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:58:08.407949   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0428 23:58:08.409148   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:58:08.409163   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:58:08.409170   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:58:08.409176   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.411283   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.411639   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.411666   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.411790   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.411958   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.412074   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.412205   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.412341   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.412556   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.412567   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:58:08.517856   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:58:08.517876   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:58:08.517884   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.520659   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.521074   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.521104   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.521249   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.521452   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.521595   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.521718   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.521891   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.522108   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.522122   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:58:08.623755   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:58:08.623844   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:58:08.623859   36356 main.go:141] libmachine: Provisioning with buildroot...
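[editor's note] Provisioner detection above boils down to reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A small Go sketch of parsing that output into key/value pairs; this mirrors the idea, not minikube's exact detection code.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into key/value pairs,
// the kind of parsing behind "found compatible host: buildroot".
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("detected distro:", info["ID"], info["VERSION_ID"])
}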
	I0428 23:58:08.623869   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.624132   36356 buildroot.go:166] provisioning hostname "ha-274394-m02"
	I0428 23:58:08.624159   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.624360   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.626758   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.627168   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.627207   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.627320   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.627519   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.627679   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.627799   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.627986   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.628168   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.628185   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394-m02 && echo "ha-274394-m02" | sudo tee /etc/hostname
	I0428 23:58:08.748721   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394-m02
	
	I0428 23:58:08.748751   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.751328   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.751725   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.751754   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.751921   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:08.752118   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.752289   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:08.752435   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:08.752591   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:08.752746   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:08.752761   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:58:08.865693   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
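[editor's note] The hostname step above sets the transient and persistent hostname and patches /etc/hosts so the new name resolves to 127.0.1.1 locally. A Go sketch that composes an equivalent shell snippet for a given node name; the helper and its name are made up for illustration.

package main

import "fmt"

// hostnameCommand builds the shell snippet that sets the hostname and keeps
// /etc/hosts in sync, mirroring the command shown in the log; illustrative only.
func hostnameCommand(name string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCommand("ha-274394-m02"))
}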
	I0428 23:58:08.865735   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:58:08.865752   36356 buildroot.go:174] setting up certificates
	I0428 23:58:08.865761   36356 provision.go:84] configureAuth start
	I0428 23:58:08.865770   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetMachineName
	I0428 23:58:08.866040   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:08.868779   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.869186   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.869215   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.869353   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:08.871473   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.871800   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:08.871832   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:08.871946   36356 provision.go:143] copyHostCerts
	I0428 23:58:08.871975   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:58:08.872008   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:58:08.872018   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:58:08.872094   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:58:08.872213   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:58:08.872239   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:58:08.872244   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:58:08.872278   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:58:08.872365   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:58:08.872389   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:58:08.872398   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:58:08.872430   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:58:08.872508   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394-m02 san=[127.0.0.1 192.168.39.43 ha-274394-m02 localhost minikube]
	I0428 23:58:09.052110   36356 provision.go:177] copyRemoteCerts
	I0428 23:58:09.052164   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:58:09.052184   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.054860   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.055216   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.055240   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.055399   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.055567   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.055717   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.055858   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.137022   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:58:09.137100   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:58:09.167033   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:58:09.167092   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 23:58:09.196003   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:58:09.196052   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 23:58:09.225865   36356 provision.go:87] duration metric: took 360.094398ms to configureAuth
	I0428 23:58:09.225900   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:58:09.226133   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:09.226208   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.228933   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.229315   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.229339   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.229568   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.229766   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.229900   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.230040   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.230170   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:09.230388   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:09.230411   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:58:09.507505   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:58:09.507533   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:58:09.507544   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetURL
	I0428 23:58:09.508981   36356 main.go:141] libmachine: (ha-274394-m02) DBG | Using libvirt version 6000000
	I0428 23:58:09.511365   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.511820   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.511847   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.511983   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:58:09.511995   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:58:09.512002   36356 client.go:171] duration metric: took 29.317754136s to LocalClient.Create
	I0428 23:58:09.512028   36356 start.go:167] duration metric: took 29.317822967s to libmachine.API.Create "ha-274394"
	I0428 23:58:09.512041   36356 start.go:293] postStartSetup for "ha-274394-m02" (driver="kvm2")
	I0428 23:58:09.512058   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:58:09.512081   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.512308   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:58:09.512333   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.514486   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.514786   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.514819   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.514890   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.515065   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.515222   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.515394   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.598106   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:58:09.603534   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:58:09.605400   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:58:09.605465   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:58:09.605532   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:58:09.605541   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:58:09.605627   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:58:09.616520   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:58:09.648817   36356 start.go:296] duration metric: took 136.751105ms for postStartSetup
	I0428 23:58:09.648864   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetConfigRaw
	I0428 23:58:09.649443   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:09.651782   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.652097   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.652145   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.652400   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:09.652581   36356 start.go:128] duration metric: took 29.476676023s to createHost
	I0428 23:58:09.652603   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.654816   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.655121   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.655141   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.655311   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.655499   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.655654   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.655785   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.655923   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:58:09.656090   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0428 23:58:09.656101   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:58:09.763749   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348689.739838626
	
	I0428 23:58:09.763772   36356 fix.go:216] guest clock: 1714348689.739838626
	I0428 23:58:09.763782   36356 fix.go:229] Guest: 2024-04-28 23:58:09.739838626 +0000 UTC Remote: 2024-04-28 23:58:09.652593063 +0000 UTC m=+85.098481504 (delta=87.245563ms)
	I0428 23:58:09.763801   36356 fix.go:200] guest clock delta is within tolerance: 87.245563ms
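[editor's note] fix.go above compares the guest clock read over SSH with the host clock and, when the delta stays inside a tolerance (87ms here), leaves the guest time alone. A minimal Go sketch of that check; the tolerance value used below is an assumption, not the one minikube applies.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the host
// clock that no adjustment is needed.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(87 * time.Millisecond) // roughly the delta seen in the log
	if delta, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would adjust\n", delta)
	}
}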
	I0428 23:58:09.763808   36356 start.go:83] releasing machines lock for "ha-274394-m02", held for 29.58803473s
	I0428 23:58:09.763831   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.764088   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:09.766409   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.766722   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.766751   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.769042   36356 out.go:177] * Found network options:
	I0428 23:58:09.770388   36356 out.go:177]   - NO_PROXY=192.168.39.237
	W0428 23:58:09.771614   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:58:09.771670   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772270   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772475   36356 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0428 23:58:09.772539   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:58:09.772591   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	W0428 23:58:09.772706   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:58:09.772781   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:58:09.772803   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0428 23:58:09.775069   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775442   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.775474   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775498   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775608   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.775788   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.775865   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:09.775888   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:09.775969   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.776049   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0428 23:58:09.776112   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:09.776192   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0428 23:58:09.776354   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0428 23:58:09.776515   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0428 23:58:10.017919   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:58:10.025250   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:58:10.025319   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:58:10.042207   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:58:10.042227   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:58:10.042298   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:58:10.060095   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:58:10.074396   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:58:10.074438   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:58:10.089348   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:58:10.105801   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:58:10.231914   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:58:10.370359   36356 docker.go:233] disabling docker service ...
	I0428 23:58:10.370433   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:58:10.387029   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:58:10.401713   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:58:10.545671   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:58:10.673835   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:58:10.690495   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:58:10.713136   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:58:10.713195   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.724228   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:58:10.724289   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.734841   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.745343   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.755951   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:58:10.769464   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.780518   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.800846   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:58:10.813387   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:58:10.824342   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:58:10.824386   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:58:10.840504   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:58:10.850816   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:11.002082   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0428 23:58:11.150506   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:58:11.150580   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:58:11.155694   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:58:11.155737   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:58:11.159794   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:58:11.198604   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:58:11.198662   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:58:11.227554   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:58:11.259462   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:58:11.261048   36356 out.go:177]   - env NO_PROXY=192.168.39.237
	I0428 23:58:11.262197   36356 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0428 23:58:11.264686   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:11.265028   36356 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:56 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0428 23:58:11.265066   36356 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0428 23:58:11.265314   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:58:11.269635   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:58:11.284122   36356 mustload.go:65] Loading cluster: ha-274394
	I0428 23:58:11.284320   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:11.284574   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:11.284606   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:11.299185   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0428 23:58:11.299552   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:11.300015   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:11.300035   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:11.300322   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:11.300512   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:58:11.302241   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:58:11.302540   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:11.302569   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:11.316673   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0428 23:58:11.317081   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:11.317581   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:11.317603   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:11.317957   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:11.318147   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:58:11.318306   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.43
	I0428 23:58:11.318321   36356 certs.go:194] generating shared ca certs ...
	I0428 23:58:11.318343   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.318474   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:58:11.318509   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:58:11.318518   36356 certs.go:256] generating profile certs ...
	I0428 23:58:11.318589   36356 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:58:11.318612   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c
	I0428 23:58:11.318627   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.254]
	I0428 23:58:11.545721   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c ...
	I0428 23:58:11.545748   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c: {Name:mkeed2aa96bd12faaef131331a07f70de364149a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.545910   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c ...
	I0428 23:58:11.545924   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c: {Name:mk7099ae4bf57427dc8efa8eca1c99f9dfbcfc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:58:11.545987   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.7e238c0c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:58:11.546128   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.7e238c0c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:58:11.546251   36356 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:58:11.546266   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:58:11.546283   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:58:11.546302   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:58:11.546314   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:58:11.546327   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:58:11.546339   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:58:11.546356   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:58:11.546367   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:58:11.546440   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:58:11.546474   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:58:11.546484   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:58:11.546515   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:58:11.546544   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:58:11.546575   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:58:11.546612   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:58:11.546640   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:58:11.546660   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:11.546673   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:58:11.546701   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:58:11.549269   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:11.549627   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:58:11.549651   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:11.549864   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:58:11.550078   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:58:11.550246   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:58:11.550386   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:58:11.626257   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0428 23:58:11.633493   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0428 23:58:11.648763   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0428 23:58:11.653913   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0428 23:58:11.666682   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0428 23:58:11.671203   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0428 23:58:11.683465   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0428 23:58:11.688416   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0428 23:58:11.700908   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0428 23:58:11.705691   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0428 23:58:11.718671   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0428 23:58:11.723458   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0428 23:58:11.736255   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:58:11.765294   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:58:11.790622   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:58:11.815237   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:58:11.840522   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0428 23:58:11.866184   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:58:11.892486   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:58:11.919387   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:58:11.945021   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:58:11.971444   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:58:11.998626   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:58:12.027890   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0428 23:58:12.047360   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0428 23:58:12.066886   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0428 23:58:12.085348   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0428 23:58:12.103655   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0428 23:58:12.122468   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0428 23:58:12.140788   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0428 23:58:12.159242   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:58:12.165224   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:58:12.177844   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.183316   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.183365   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:58:12.190660   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 23:58:12.204445   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:58:12.218304   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.223358   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.223409   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:58:12.229577   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:58:12.243231   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:58:12.256708   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.261981   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.262039   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:58:12.268393   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0428 23:58:12.280239   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:58:12.284658   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:58:12.284714   36356 kubeadm.go:928] updating node {m02 192.168.39.43 8443 v1.30.0 crio true true} ...
	I0428 23:58:12.284819   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:58:12.284857   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:58:12.284892   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:58:12.305290   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:58:12.305341   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0428 23:58:12.305383   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:58:12.316898   36356 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0428 23:58:12.316947   36356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0428 23:58:12.329307   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0428 23:58:12.329324   36356 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0428 23:58:12.329342   36356 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0428 23:58:12.329329   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:58:12.329504   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:58:12.335405   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 23:58:12.335438   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0428 23:58:14.045229   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:58:14.045312   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:58:14.051404   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 23:58:14.051439   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0428 23:58:15.812706   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:58:15.829608   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:58:15.829706   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:58:15.834283   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 23:58:15.834318   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0428 23:58:16.307677   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0428 23:58:16.319127   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0428 23:58:16.341843   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:58:16.362469   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0428 23:58:16.382263   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:58:16.386704   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:58:16.399847   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:16.542357   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:58:16.562649   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:58:16.563136   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:16.563183   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:16.578899   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0428 23:58:16.579324   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:16.579790   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:16.579815   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:16.580113   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:16.580286   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:58:16.580432   36356 start.go:316] joinCluster: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:58:16.580525   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0428 23:58:16.580547   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:58:16.583320   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:16.583742   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:58:16.583771   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:58:16.583929   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:58:16.584086   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:58:16.584266   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:58:16.584415   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:58:16.761399   36356 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:16.761453   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kta4yl.dkqb9qr4g4gf2lc7 --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I0428 23:58:38.387908   36356 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kta4yl.dkqb9qr4g4gf2lc7 --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (21.626432622s)
	I0428 23:58:38.387953   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0428 23:58:39.010776   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394-m02 minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=false
	I0428 23:58:39.142412   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-274394-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0428 23:58:39.293444   36356 start.go:318] duration metric: took 22.713007972s to joinCluster
	I0428 23:58:39.293513   36356 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:39.295333   36356 out.go:177] * Verifying Kubernetes components...
	I0428 23:58:39.293856   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:39.296894   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:58:39.590067   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:58:39.653930   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:58:39.654223   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0428 23:58:39.654290   36356 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.237:8443
	I0428 23:58:39.654479   36356 node_ready.go:35] waiting up to 6m0s for node "ha-274394-m02" to be "Ready" ...
	I0428 23:58:39.654555   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:39.654563   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:39.654570   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:39.654574   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:39.664649   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 23:58:40.155296   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:40.155331   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:40.155342   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:40.155348   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:40.172701   36356 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0428 23:58:40.655311   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:40.655338   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:40.655350   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:40.655361   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:40.661218   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:41.155679   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:41.155700   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:41.155710   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:41.155713   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:41.159217   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:41.654979   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:41.655002   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:41.655011   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:41.655017   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:41.658216   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:41.659155   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:42.155563   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:42.155591   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:42.155602   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:42.155608   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:42.159309   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:42.655232   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:42.655250   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:42.655258   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:42.655262   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:42.658587   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:43.155324   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:43.155378   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:43.155392   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:43.155397   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:43.160299   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:43.655539   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:43.655559   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:43.655567   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:43.655570   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:43.659388   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:43.660385   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:44.154684   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:44.154707   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:44.154715   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:44.154720   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:44.158344   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:44.655229   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:44.684950   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:44.684969   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:44.684976   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:44.689026   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:45.155245   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:45.155266   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:45.155273   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:45.155278   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:45.158906   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:45.655245   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:45.655265   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:45.655272   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:45.655277   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:45.659245   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:46.155290   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:46.155311   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:46.155322   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:46.155328   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:46.160353   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:46.161167   36356 node_ready.go:53] node "ha-274394-m02" has status "Ready":"False"
	I0428 23:58:46.654806   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:46.654828   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:46.654835   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:46.654839   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:46.658180   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:47.155464   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:47.155486   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:47.155494   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:47.155498   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:47.160773   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:47.654818   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:47.654843   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:47.654850   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:47.654855   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:47.658410   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.154672   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.154697   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.154706   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.154710   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.159615   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:48.160955   36356 node_ready.go:49] node "ha-274394-m02" has status "Ready":"True"
	I0428 23:58:48.160973   36356 node_ready.go:38] duration metric: took 8.506473788s for node "ha-274394-m02" to be "Ready" ...
	I0428 23:58:48.160982   36356 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0428 23:58:48.161046   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:48.161055   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.161062   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.161068   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.169896   36356 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0428 23:58:48.176675   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.176747   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rslhx
	I0428 23:58:48.176759   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.176765   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.176768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.179899   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.180719   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.180735   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.180742   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.180747   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.184065   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.184638   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.184657   36356 pod_ready.go:81] duration metric: took 7.958913ms for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.184666   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.184714   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xkdcv
	I0428 23:58:48.184722   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.184730   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.184736   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.195110   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0428 23:58:48.195972   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.195992   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.195999   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.196003   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.199213   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.199713   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.199733   36356 pod_ready.go:81] duration metric: took 15.060231ms for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.199747   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.199805   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394
	I0428 23:58:48.199815   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.199821   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.199825   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.203469   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.204875   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:48.204891   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.204898   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.204902   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.208879   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.209993   36356 pod_ready.go:92] pod "etcd-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:48.210011   36356 pod_ready.go:81] duration metric: took 10.253451ms for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.210037   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:48.210104   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:48.210112   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.210118   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.210123   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.212475   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:48.213184   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.213196   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.213203   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.213206   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.216781   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:48.710847   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:48.710869   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.710877   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.710881   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.717464   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0428 23:58:48.718367   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:48.718385   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:48.718395   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:48.718402   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:48.721385   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:49.210458   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:49.210481   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.210488   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.210492   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.214589   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:49.215523   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:49.215538   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.215545   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.215549   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.218124   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:49.710303   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:49.710327   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.710334   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.710340   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.714259   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:49.715063   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:49.715080   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:49.715088   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:49.715092   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:49.718099   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:50.211244   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:50.211267   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.211277   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.211285   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.215809   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:50.216675   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:50.216695   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.216705   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.216711   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.220299   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:50.220861   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:50.710227   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:50.710254   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.710266   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.710273   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.714777   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:50.715631   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:50.715647   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:50.715656   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:50.715661   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:50.718977   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.210455   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:51.210484   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.210492   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.210495   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.214863   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:51.215676   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:51.215695   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.215705   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.215711   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.219164   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.710844   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:51.710866   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.710874   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.710878   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.714505   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:51.715380   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:51.715395   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:51.715402   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:51.715405   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:51.718737   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.210936   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:52.210960   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.210969   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.210973   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.214315   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.215246   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:52.215264   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.215271   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.215276   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.217789   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:52.710926   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:52.710951   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.710960   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.710972   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.714784   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.715778   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:52.715797   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:52.715805   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:52.715810   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:52.718927   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:52.719705   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:53.211232   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:53.211273   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.211284   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.211289   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.215284   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:53.215955   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:53.215970   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.215976   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.215980   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.218518   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:53.710221   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:53.710244   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.710254   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.710259   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.713769   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:53.714799   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:53.714816   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:53.714826   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:53.714831   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:53.717753   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:54.210997   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:54.211027   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.211035   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.211039   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.214689   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.215567   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:54.215581   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.215587   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.215592   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.219396   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.710980   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:54.710999   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.711006   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.711010   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.714596   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:54.715401   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:54.715416   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:54.715421   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:54.715425   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:54.718723   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:55.211159   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:55.211186   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.211199   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.211207   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.215703   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:55.216484   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:55.216501   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.216507   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.216511   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.219394   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:55.220002   36356 pod_ready.go:102] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"False"
	I0428 23:58:55.710286   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:55.710320   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.710330   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.710335   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.714152   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:55.715422   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:55.715435   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:55.715442   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:55.715446   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:55.718421   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.210425   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:56.210447   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.210455   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.210459   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.214208   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:56.215146   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:56.215160   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.215167   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.215171   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.217756   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.710994   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:56.711013   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.711021   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.711024   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.713949   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:56.714853   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:56.714867   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:56.714872   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:56.714876   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:56.717415   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.210188   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0428 23:58:57.210211   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.210219   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.210223   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.213897   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.215006   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.215024   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.215033   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.215039   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.217552   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.218220   36356 pod_ready.go:92] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.218236   36356 pod_ready.go:81] duration metric: took 9.008187231s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
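
Each pod_ready check above pairs a GET on the pod with a GET on its node and looks at the pod's Ready condition. Below is a minimal sketch of that condition check against the raw API JSON, assuming an already-authenticated *http.Client and base URL; podReady and podStatus are hypothetical names, not minikube's own helpers.

	package podcheck
	
	import (
		"encoding/json"
		"fmt"
		"net/http"
	)
	
	// podStatus mirrors only the fields of the Pod object the check needs.
	type podStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	
	// podReady fetches one pod and reports whether its Ready condition is "True".
	func podReady(c *http.Client, base, ns, name string) (bool, error) {
		resp, err := c.Get(fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", base, ns, name))
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
	
		var p podStatus
		if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
			return false, err
		}
		for _, cond := range p.Status.Conditions {
			if cond.Type == "Ready" {
				return cond.Status == "True", nil
			}
		}
		return false, nil
	}
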
	I0428 23:58:57.218250   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.218295   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394
	I0428 23:58:57.218302   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.218308   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.218315   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.220629   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.221425   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:57.221443   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.221453   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.221462   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.223509   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.224113   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.224133   36356 pod_ready.go:81] duration metric: took 5.873511ms for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.224144   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.224215   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m02
	I0428 23:58:57.224227   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.224236   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.224244   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.226285   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.227060   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.227075   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.227082   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.227087   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.229206   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.229767   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.229788   36356 pod_ready.go:81] duration metric: took 5.632505ms for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.229799   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.229849   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0428 23:58:57.229858   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.229864   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.229868   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.232892   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.233655   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:57.233670   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.233676   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.233682   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.235860   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.236478   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.236500   36356 pod_ready.go:81] duration metric: took 6.69293ms for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.236513   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.236567   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0428 23:58:57.236582   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.236591   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.236603   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.239009   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.239661   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.239676   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.239684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.239691   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.242103   36356 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0428 23:58:57.242658   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.242674   36356 pod_ready.go:81] duration metric: took 6.151599ms for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.242681   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.411103   36356 request.go:629] Waited for 168.362894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0428 23:58:57.411156   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0428 23:58:57.411161   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.411169   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.411174   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.414521   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:57.610626   36356 request.go:629] Waited for 195.367099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.610752   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:57.610822   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.610833   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.610846   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.615056   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:57.615713   36356 pod_ready.go:92] pod "kube-proxy-g95c9" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:57.615731   36356 pod_ready.go:81] duration metric: took 373.044367ms for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
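
The "Waited for ... due to client-side throttling" lines come from the Kubernetes client's token-bucket rate limiter pacing outgoing requests. A minimal sketch of the same idea with golang.org/x/time/rate follows; the limitedClient type and the qps/burst values are illustrative (client-go's common defaults are around 5 QPS with a burst of 10), not minikube's actual configuration.

	package throttle
	
	import (
		"context"
		"net/http"
	
		"golang.org/x/time/rate"
	)
	
	// limitedClient paces outgoing requests with a token bucket, the same idea
	// behind the "Waited for ... due to client-side throttling" log messages.
	type limitedClient struct {
		c       *http.Client
		limiter *rate.Limiter
	}
	
	func newLimitedClient(c *http.Client, qps float64, burst int) *limitedClient {
		return &limitedClient{c: c, limiter: rate.NewLimiter(rate.Limit(qps), burst)}
	}
	
	func (l *limitedClient) Get(ctx context.Context, url string) (*http.Response, error) {
		// Wait blocks until a token is available (or the context is done);
		// that blocking time is what the log reports as the wait duration.
		if err := l.limiter.Wait(ctx); err != nil {
			return nil, err
		}
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		return l.c.Do(req)
	}
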
	I0428 23:58:57.615740   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:57.811010   36356 request.go:629] Waited for 195.183352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0428 23:58:57.811064   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0428 23:58:57.811068   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:57.811076   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:57.811081   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:57.815095   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.010219   36356 request.go:629] Waited for 194.281833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.010339   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.010358   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.010365   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.010370   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.014383   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.015289   36356 pod_ready.go:92] pod "kube-proxy-pwbfs" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.015307   36356 pod_ready.go:81] duration metric: took 399.560892ms for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.015315   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.210510   36356 request.go:629] Waited for 195.105309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0428 23:58:58.210572   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0428 23:58:58.210577   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.210583   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.210588   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.215302   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.410679   36356 request.go:629] Waited for 194.371002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.410749   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0428 23:58:58.410755   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.410764   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.410770   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.414880   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.415576   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.415594   36356 pod_ready.go:81] duration metric: took 400.27299ms for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.415604   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.610680   36356 request.go:629] Waited for 195.022143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0428 23:58:58.610745   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0428 23:58:58.610751   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.610756   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.610760   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.615040   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:58:58.811088   36356 request.go:629] Waited for 195.345352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:58.811150   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0428 23:58:58.811156   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.811167   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.811171   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.814458   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:58.815300   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0428 23:58:58.815317   36356 pod_ready.go:81] duration metric: took 399.706734ms for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0428 23:58:58.815328   36356 pod_ready.go:38] duration metric: took 10.654327215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
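
The overall cadence visible above is a roughly 500ms poll per pod with a 6-minute budget. A generic sketch of that deadline-bounded loop is shown below; waitReady and check are illustrative names, not minikube's actual helpers.

	package poll
	
	import (
		"context"
		"time"
	)
	
	// waitReady polls check about every 500ms until it returns true or the
	// 6-minute budget runs out, mirroring the cadence in the log above.
	func waitReady(ctx context.Context, check func(context.Context) (bool, error)) error {
		ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
		defer cancel()
	
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
	
		for {
			ok, err := check(ctx)
			if err == nil && ok {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}
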
	I0428 23:58:58.815340   36356 api_server.go:52] waiting for apiserver process to appear ...
	I0428 23:58:58.815386   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0428 23:58:58.831473   36356 api_server.go:72] duration metric: took 19.537927218s to wait for apiserver process to appear ...
	I0428 23:58:58.831505   36356 api_server.go:88] waiting for apiserver healthz status ...
	I0428 23:58:58.831530   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0428 23:58:58.836498   36356 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0428 23:58:58.836579   36356 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0428 23:58:58.836595   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:58.836613   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:58.836621   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:58.837583   36356 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0428 23:58:58.837822   36356 api_server.go:141] control plane version: v1.30.0
	I0428 23:58:58.837849   36356 api_server.go:131] duration metric: took 6.335764ms to wait for apiserver health ...
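
The apiserver health check above is two plain requests: /healthz must answer 200 with body "ok", then /version reports the control-plane version (v1.30.0 here). A sketch under the same assumption of a pre-configured *http.Client; apiServerHealthy is a hypothetical name.

	package health
	
	import (
		"encoding/json"
		"fmt"
		"io"
		"net/http"
		"strings"
	)
	
	// apiServerHealthy expects /healthz to answer 200 "ok", then reads the
	// control-plane version from /version, as in the two requests logged above.
	func apiServerHealthy(c *http.Client, base string) (string, error) {
		resp, err := c.Get(base + "/healthz")
		if err != nil {
			return "", err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
			return "", fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
		}
	
		vresp, err := c.Get(base + "/version")
		if err != nil {
			return "", err
		}
		defer vresp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(vresp.Body).Decode(&v); err != nil {
			return "", err
		}
		return v.GitVersion, nil // e.g. "v1.30.0"
	}
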
	I0428 23:58:58.837859   36356 system_pods.go:43] waiting for kube-system pods to appear ...
	I0428 23:58:59.010251   36356 request.go:629] Waited for 172.319916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.010324   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.010354   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.010369   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.010377   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.016252   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:59.023090   36356 system_pods.go:59] 17 kube-system pods found
	I0428 23:58:59.023126   36356 system_pods.go:61] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0428 23:58:59.023132   36356 system_pods.go:61] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0428 23:58:59.023136   36356 system_pods.go:61] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0428 23:58:59.023140   36356 system_pods.go:61] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0428 23:58:59.023143   36356 system_pods.go:61] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0428 23:58:59.023146   36356 system_pods.go:61] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0428 23:58:59.023150   36356 system_pods.go:61] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0428 23:58:59.023155   36356 system_pods.go:61] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0428 23:58:59.023158   36356 system_pods.go:61] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0428 23:58:59.023161   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0428 23:58:59.023167   36356 system_pods.go:61] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0428 23:58:59.023172   36356 system_pods.go:61] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0428 23:58:59.023175   36356 system_pods.go:61] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0428 23:58:59.023180   36356 system_pods.go:61] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0428 23:58:59.023183   36356 system_pods.go:61] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0428 23:58:59.023186   36356 system_pods.go:61] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0428 23:58:59.023189   36356 system_pods.go:61] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0428 23:58:59.023194   36356 system_pods.go:74] duration metric: took 185.326461ms to wait for pod list to return data ...
	I0428 23:58:59.023207   36356 default_sa.go:34] waiting for default service account to be created ...
	I0428 23:58:59.210913   36356 request.go:629] Waited for 187.648663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0428 23:58:59.210979   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0428 23:58:59.210993   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.211002   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.211013   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.214865   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:59.215086   36356 default_sa.go:45] found service account: "default"
	I0428 23:58:59.215102   36356 default_sa.go:55] duration metric: took 191.890036ms for default service account to be created ...
	I0428 23:58:59.215110   36356 system_pods.go:116] waiting for k8s-apps to be running ...
	I0428 23:58:59.410522   36356 request.go:629] Waited for 195.32449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.410587   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0428 23:58:59.410592   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.410599   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.410603   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.416485   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0428 23:58:59.422169   36356 system_pods.go:86] 17 kube-system pods found
	I0428 23:58:59.422199   36356 system_pods.go:89] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0428 23:58:59.422207   36356 system_pods.go:89] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0428 23:58:59.422214   36356 system_pods.go:89] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0428 23:58:59.422220   36356 system_pods.go:89] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0428 23:58:59.422226   36356 system_pods.go:89] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0428 23:58:59.422232   36356 system_pods.go:89] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0428 23:58:59.422237   36356 system_pods.go:89] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0428 23:58:59.422243   36356 system_pods.go:89] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0428 23:58:59.422251   36356 system_pods.go:89] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0428 23:58:59.422265   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0428 23:58:59.422275   36356 system_pods.go:89] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0428 23:58:59.422283   36356 system_pods.go:89] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0428 23:58:59.422293   36356 system_pods.go:89] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0428 23:58:59.422300   36356 system_pods.go:89] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0428 23:58:59.422310   36356 system_pods.go:89] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0428 23:58:59.422316   36356 system_pods.go:89] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0428 23:58:59.422325   36356 system_pods.go:89] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0428 23:58:59.422337   36356 system_pods.go:126] duration metric: took 207.21932ms to wait for k8s-apps to be running ...
	I0428 23:58:59.422349   36356 system_svc.go:44] waiting for kubelet service to be running ....
	I0428 23:58:59.422404   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:58:59.441950   36356 system_svc.go:56] duration metric: took 19.591591ms WaitForService to wait for kubelet
	I0428 23:58:59.441982   36356 kubeadm.go:576] duration metric: took 20.148438728s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
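
The kubelet check above is a single systemctl probe whose exit status is the answer. Below is a local stand-in for the command minikube runs over SSH on the guest; the kubeletActive helper is hypothetical.

	package svc
	
	import "os/exec"
	
	// kubeletActive mirrors the "sudo systemctl is-active --quiet service kubelet"
	// probe from the log: exit status 0 means the unit is active. minikube runs
	// this over SSH inside the VM; here it is shown as a plain local exec.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}
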
	I0428 23:58:59.442004   36356 node_conditions.go:102] verifying NodePressure condition ...
	I0428 23:58:59.610455   36356 request.go:629] Waited for 168.364577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0428 23:58:59.610505   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0428 23:58:59.610515   36356 round_trippers.go:469] Request Headers:
	I0428 23:58:59.610522   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:58:59.610526   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:58:59.614523   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0428 23:58:59.615695   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 23:58:59.615718   36356 node_conditions.go:123] node cpu capacity is 2
	I0428 23:58:59.615731   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0428 23:58:59.615735   36356 node_conditions.go:123] node cpu capacity is 2
	I0428 23:58:59.615741   36356 node_conditions.go:105] duration metric: took 173.731434ms to run NodePressure ...
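
The NodePressure step above reads each node's capacity (ephemeral-storage and cpu) from GET /api/v1/nodes. A minimal sketch of that read, again assuming a pre-configured *http.Client; nodeList and printCapacity are illustrative names.

	package nodes
	
	import (
		"encoding/json"
		"fmt"
		"net/http"
	)
	
	// nodeList mirrors the fields consulted by the NodePressure verification:
	// per-node ephemeral-storage and cpu capacity.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}
	
	func printCapacity(c *http.Client, base string) error {
		resp, err := c.Get(base + "/api/v1/nodes")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		var nl nodeList
		if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
			return err
		}
		for _, n := range nl.Items {
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
				n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
		}
		return nil
	}
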
	I0428 23:58:59.615756   36356 start.go:240] waiting for startup goroutines ...
	I0428 23:58:59.615797   36356 start.go:254] writing updated cluster config ...
	I0428 23:58:59.617862   36356 out.go:177] 
	I0428 23:58:59.619360   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:58:59.619475   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:59.621275   36356 out.go:177] * Starting "ha-274394-m03" control-plane node in "ha-274394" cluster
	I0428 23:58:59.622428   36356 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:58:59.622455   36356 cache.go:56] Caching tarball of preloaded images
	I0428 23:58:59.622553   36356 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0428 23:58:59.622565   36356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0428 23:58:59.622681   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:58:59.622874   36356 start.go:360] acquireMachinesLock for ha-274394-m03: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0428 23:58:59.622929   36356 start.go:364] duration metric: took 33.665µs to acquireMachinesLock for "ha-274394-m03"
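
The lock parameters logged above (Delay:500ms Timeout:13m0s) describe a retry-until-timeout acquisition. Below is a generic sketch of that pattern, not minikube's actual lock implementation; tryLock is a stand-in for whatever backs the named machines lock.

	package lock
	
	import (
		"fmt"
		"time"
	)
	
	// acquire keeps retrying tryLock every delay until it succeeds or the
	// overall timeout expires, matching the Delay/Timeout values in the log.
	func acquire(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if tryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for lock", timeout)
			}
			time.Sleep(delay)
		}
	}
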
	I0428 23:58:59.622950   36356 start.go:93] Provisioning new machine with config: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:58:59.623064   36356 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0428 23:58:59.624667   36356 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0428 23:58:59.624758   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:58:59.624802   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:58:59.641214   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I0428 23:58:59.641727   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:58:59.642309   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:58:59.642334   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:58:59.642611   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:58:59.642804   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:58:59.642927   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:58:59.643066   36356 start.go:159] libmachine.API.Create for "ha-274394" (driver="kvm2")
	I0428 23:58:59.643091   36356 client.go:168] LocalClient.Create starting
	I0428 23:58:59.643121   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0428 23:58:59.643154   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:58:59.643179   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:58:59.643227   36356 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0428 23:58:59.643249   36356 main.go:141] libmachine: Decoding PEM data...
	I0428 23:58:59.643260   36356 main.go:141] libmachine: Parsing certificate...
	I0428 23:58:59.643281   36356 main.go:141] libmachine: Running pre-create checks...
	I0428 23:58:59.643296   36356 main.go:141] libmachine: (ha-274394-m03) Calling .PreCreateCheck
	I0428 23:58:59.643479   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:58:59.643879   36356 main.go:141] libmachine: Creating machine...
	I0428 23:58:59.643892   36356 main.go:141] libmachine: (ha-274394-m03) Calling .Create
	I0428 23:58:59.644001   36356 main.go:141] libmachine: (ha-274394-m03) Creating KVM machine...
	I0428 23:58:59.645183   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found existing default KVM network
	I0428 23:58:59.645266   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found existing private KVM network mk-ha-274394
	I0428 23:58:59.645383   36356 main.go:141] libmachine: (ha-274394-m03) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 ...
	I0428 23:58:59.645406   36356 main.go:141] libmachine: (ha-274394-m03) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:58:59.645459   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.645378   37169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:58:59.645569   36356 main.go:141] libmachine: (ha-274394-m03) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0428 23:58:59.868035   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.867896   37169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa...
	I0428 23:58:59.956656   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.956555   37169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/ha-274394-m03.rawdisk...
	I0428 23:58:59.956683   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Writing magic tar header
	I0428 23:58:59.956697   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Writing SSH key tar header
	I0428 23:58:59.956708   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:58:59.956666   37169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 ...
	I0428 23:58:59.956777   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03
	I0428 23:58:59.956822   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0428 23:58:59.956840   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03 (perms=drwx------)
	I0428 23:58:59.956859   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0428 23:58:59.956873   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0428 23:58:59.956887   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0428 23:58:59.956902   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:58:59.956914   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0428 23:58:59.956933   36356 main.go:141] libmachine: (ha-274394-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0428 23:58:59.956960   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0428 23:58:59.956971   36356 main.go:141] libmachine: (ha-274394-m03) Creating domain...
	I0428 23:58:59.956990   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0428 23:58:59.957007   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home/jenkins
	I0428 23:58:59.957021   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Checking permissions on dir: /home
	I0428 23:58:59.957038   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Skipping /home - not owner
	I0428 23:58:59.957806   36356 main.go:141] libmachine: (ha-274394-m03) define libvirt domain using xml: 
	I0428 23:58:59.957828   36356 main.go:141] libmachine: (ha-274394-m03) <domain type='kvm'>
	I0428 23:58:59.957838   36356 main.go:141] libmachine: (ha-274394-m03)   <name>ha-274394-m03</name>
	I0428 23:58:59.957853   36356 main.go:141] libmachine: (ha-274394-m03)   <memory unit='MiB'>2200</memory>
	I0428 23:58:59.957866   36356 main.go:141] libmachine: (ha-274394-m03)   <vcpu>2</vcpu>
	I0428 23:58:59.957877   36356 main.go:141] libmachine: (ha-274394-m03)   <features>
	I0428 23:58:59.957887   36356 main.go:141] libmachine: (ha-274394-m03)     <acpi/>
	I0428 23:58:59.957898   36356 main.go:141] libmachine: (ha-274394-m03)     <apic/>
	I0428 23:58:59.957909   36356 main.go:141] libmachine: (ha-274394-m03)     <pae/>
	I0428 23:58:59.957920   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.957929   36356 main.go:141] libmachine: (ha-274394-m03)   </features>
	I0428 23:58:59.957941   36356 main.go:141] libmachine: (ha-274394-m03)   <cpu mode='host-passthrough'>
	I0428 23:58:59.957968   36356 main.go:141] libmachine: (ha-274394-m03)   
	I0428 23:58:59.957989   36356 main.go:141] libmachine: (ha-274394-m03)   </cpu>
	I0428 23:58:59.958001   36356 main.go:141] libmachine: (ha-274394-m03)   <os>
	I0428 23:58:59.958017   36356 main.go:141] libmachine: (ha-274394-m03)     <type>hvm</type>
	I0428 23:58:59.958046   36356 main.go:141] libmachine: (ha-274394-m03)     <boot dev='cdrom'/>
	I0428 23:58:59.958059   36356 main.go:141] libmachine: (ha-274394-m03)     <boot dev='hd'/>
	I0428 23:58:59.958069   36356 main.go:141] libmachine: (ha-274394-m03)     <bootmenu enable='no'/>
	I0428 23:58:59.958080   36356 main.go:141] libmachine: (ha-274394-m03)   </os>
	I0428 23:58:59.958092   36356 main.go:141] libmachine: (ha-274394-m03)   <devices>
	I0428 23:58:59.958105   36356 main.go:141] libmachine: (ha-274394-m03)     <disk type='file' device='cdrom'>
	I0428 23:58:59.958119   36356 main.go:141] libmachine: (ha-274394-m03)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/boot2docker.iso'/>
	I0428 23:58:59.958132   36356 main.go:141] libmachine: (ha-274394-m03)       <target dev='hdc' bus='scsi'/>
	I0428 23:58:59.958144   36356 main.go:141] libmachine: (ha-274394-m03)       <readonly/>
	I0428 23:58:59.958155   36356 main.go:141] libmachine: (ha-274394-m03)     </disk>
	I0428 23:58:59.958169   36356 main.go:141] libmachine: (ha-274394-m03)     <disk type='file' device='disk'>
	I0428 23:58:59.958187   36356 main.go:141] libmachine: (ha-274394-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0428 23:58:59.958206   36356 main.go:141] libmachine: (ha-274394-m03)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/ha-274394-m03.rawdisk'/>
	I0428 23:58:59.958218   36356 main.go:141] libmachine: (ha-274394-m03)       <target dev='hda' bus='virtio'/>
	I0428 23:58:59.958230   36356 main.go:141] libmachine: (ha-274394-m03)     </disk>
	I0428 23:58:59.958242   36356 main.go:141] libmachine: (ha-274394-m03)     <interface type='network'>
	I0428 23:58:59.958274   36356 main.go:141] libmachine: (ha-274394-m03)       <source network='mk-ha-274394'/>
	I0428 23:58:59.958300   36356 main.go:141] libmachine: (ha-274394-m03)       <model type='virtio'/>
	I0428 23:58:59.958313   36356 main.go:141] libmachine: (ha-274394-m03)     </interface>
	I0428 23:58:59.958329   36356 main.go:141] libmachine: (ha-274394-m03)     <interface type='network'>
	I0428 23:58:59.958342   36356 main.go:141] libmachine: (ha-274394-m03)       <source network='default'/>
	I0428 23:58:59.958350   36356 main.go:141] libmachine: (ha-274394-m03)       <model type='virtio'/>
	I0428 23:58:59.958363   36356 main.go:141] libmachine: (ha-274394-m03)     </interface>
	I0428 23:58:59.958371   36356 main.go:141] libmachine: (ha-274394-m03)     <serial type='pty'>
	I0428 23:58:59.958382   36356 main.go:141] libmachine: (ha-274394-m03)       <target port='0'/>
	I0428 23:58:59.958390   36356 main.go:141] libmachine: (ha-274394-m03)     </serial>
	I0428 23:58:59.958401   36356 main.go:141] libmachine: (ha-274394-m03)     <console type='pty'>
	I0428 23:58:59.958417   36356 main.go:141] libmachine: (ha-274394-m03)       <target type='serial' port='0'/>
	I0428 23:58:59.958433   36356 main.go:141] libmachine: (ha-274394-m03)     </console>
	I0428 23:58:59.958450   36356 main.go:141] libmachine: (ha-274394-m03)     <rng model='virtio'>
	I0428 23:58:59.958464   36356 main.go:141] libmachine: (ha-274394-m03)       <backend model='random'>/dev/random</backend>
	I0428 23:58:59.958474   36356 main.go:141] libmachine: (ha-274394-m03)     </rng>
	I0428 23:58:59.958483   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.958497   36356 main.go:141] libmachine: (ha-274394-m03)     
	I0428 23:58:59.958508   36356 main.go:141] libmachine: (ha-274394-m03)   </devices>
	I0428 23:58:59.958517   36356 main.go:141] libmachine: (ha-274394-m03) </domain>
	I0428 23:58:59.958532   36356 main.go:141] libmachine: (ha-274394-m03) 
	I0428 23:58:59.965013   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:ba:70:2d in network default
	I0428 23:58:59.965465   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring networks are active...
	I0428 23:58:59.965490   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:58:59.966174   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring network default is active
	I0428 23:58:59.966465   36356 main.go:141] libmachine: (ha-274394-m03) Ensuring network mk-ha-274394 is active
	I0428 23:58:59.966765   36356 main.go:141] libmachine: (ha-274394-m03) Getting domain xml...
	I0428 23:58:59.967422   36356 main.go:141] libmachine: (ha-274394-m03) Creating domain...
	I0428 23:59:01.202748   36356 main.go:141] libmachine: (ha-274394-m03) Waiting to get IP...
	I0428 23:59:01.203443   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.203897   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.203938   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.203872   37169 retry.go:31] will retry after 282.787142ms: waiting for machine to come up
	I0428 23:59:01.488289   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.488845   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.488880   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.488821   37169 retry.go:31] will retry after 311.074996ms: waiting for machine to come up
	I0428 23:59:01.801101   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:01.801590   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:01.801615   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:01.801538   37169 retry.go:31] will retry after 333.347197ms: waiting for machine to come up
	I0428 23:59:02.136222   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:02.136685   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:02.136722   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:02.136662   37169 retry.go:31] will retry after 515.127499ms: waiting for machine to come up
	I0428 23:59:02.652873   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:02.653262   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:02.653290   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:02.653217   37169 retry.go:31] will retry after 472.600429ms: waiting for machine to come up
	I0428 23:59:03.127829   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:03.128260   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:03.128285   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:03.128216   37169 retry.go:31] will retry after 918.328461ms: waiting for machine to come up
	I0428 23:59:04.047989   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:04.048469   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:04.048501   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:04.048401   37169 retry.go:31] will retry after 1.054046887s: waiting for machine to come up
	I0428 23:59:05.104188   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:05.104616   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:05.104654   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:05.104563   37169 retry.go:31] will retry after 1.317728284s: waiting for machine to come up
	I0428 23:59:06.424099   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:06.424567   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:06.424603   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:06.424502   37169 retry.go:31] will retry after 1.54429179s: waiting for machine to come up
	I0428 23:59:07.971097   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:07.971619   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:07.971640   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:07.971572   37169 retry.go:31] will retry after 1.943348331s: waiting for machine to come up
	I0428 23:59:09.916650   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:09.917110   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:09.917138   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:09.917059   37169 retry.go:31] will retry after 2.643143471s: waiting for machine to come up
	I0428 23:59:12.563295   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:12.563756   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:12.563783   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:12.563719   37169 retry.go:31] will retry after 3.420586328s: waiting for machine to come up
	I0428 23:59:15.986099   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:15.986542   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:15.986573   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:15.986487   37169 retry.go:31] will retry after 3.581143816s: waiting for machine to come up
	I0428 23:59:19.571466   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:19.571889   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find current IP address of domain ha-274394-m03 in network mk-ha-274394
	I0428 23:59:19.571918   36356 main.go:141] libmachine: (ha-274394-m03) DBG | I0428 23:59:19.571850   37169 retry.go:31] will retry after 5.55088001s: waiting for machine to come up
	I0428 23:59:25.124118   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.124562   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.124584   36356 main.go:141] libmachine: (ha-274394-m03) Found IP for machine: 192.168.39.250
	I0428 23:59:25.124598   36356 main.go:141] libmachine: (ha-274394-m03) Reserving static IP address...
	I0428 23:59:25.124921   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find host DHCP lease matching {name: "ha-274394-m03", mac: "52:54:00:0d:4c:dd", ip: "192.168.39.250"} in network mk-ha-274394
	I0428 23:59:25.197142   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Getting to WaitForSSH function...
	I0428 23:59:25.197166   36356 main.go:141] libmachine: (ha-274394-m03) Reserved static IP address: 192.168.39.250
	I0428 23:59:25.197212   36356 main.go:141] libmachine: (ha-274394-m03) Waiting for SSH to be available...
	I0428 23:59:25.199898   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:25.200254   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394
	I0428 23:59:25.200280   36356 main.go:141] libmachine: (ha-274394-m03) DBG | unable to find defined IP address of network mk-ha-274394 interface with MAC address 52:54:00:0d:4c:dd
	I0428 23:59:25.200392   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH client type: external
	I0428 23:59:25.200415   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa (-rw-------)
	I0428 23:59:25.200454   36356 main.go:141] libmachine: (ha-274394-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:59:25.200480   36356 main.go:141] libmachine: (ha-274394-m03) DBG | About to run SSH command:
	I0428 23:59:25.200505   36356 main.go:141] libmachine: (ha-274394-m03) DBG | exit 0
	I0428 23:59:25.204174   36356 main.go:141] libmachine: (ha-274394-m03) DBG | SSH cmd err, output: exit status 255: 
	I0428 23:59:25.204192   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0428 23:59:25.204202   36356 main.go:141] libmachine: (ha-274394-m03) DBG | command : exit 0
	I0428 23:59:25.204210   36356 main.go:141] libmachine: (ha-274394-m03) DBG | err     : exit status 255
	I0428 23:59:25.204221   36356 main.go:141] libmachine: (ha-274394-m03) DBG | output  : 
	I0428 23:59:28.206195   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Getting to WaitForSSH function...
	I0428 23:59:28.209965   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.210449   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.210480   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.210638   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH client type: external
	I0428 23:59:28.210667   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa (-rw-------)
	I0428 23:59:28.210707   36356 main.go:141] libmachine: (ha-274394-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0428 23:59:28.210727   36356 main.go:141] libmachine: (ha-274394-m03) DBG | About to run SSH command:
	I0428 23:59:28.210742   36356 main.go:141] libmachine: (ha-274394-m03) DBG | exit 0
	I0428 23:59:28.338185   36356 main.go:141] libmachine: (ha-274394-m03) DBG | SSH cmd err, output: <nil>: 
	I0428 23:59:28.338430   36356 main.go:141] libmachine: (ha-274394-m03) KVM machine creation complete!
	I0428 23:59:28.338791   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:59:28.339377   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:28.339584   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:28.339791   36356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0428 23:59:28.339811   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0428 23:59:28.341407   36356 main.go:141] libmachine: Detecting operating system of created instance...
	I0428 23:59:28.341426   36356 main.go:141] libmachine: Waiting for SSH to be available...
	I0428 23:59:28.341433   36356 main.go:141] libmachine: Getting to WaitForSSH function...
	I0428 23:59:28.341441   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.343848   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.344223   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.344248   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.344376   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.344530   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.344668   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.344809   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.344963   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.345166   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.345177   36356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0428 23:59:28.457369   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:59:28.457393   36356 main.go:141] libmachine: Detecting the provisioner...
	I0428 23:59:28.457401   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.459831   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.460234   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.460254   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.460462   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.460635   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.460795   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.460929   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.461110   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.461319   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.461334   36356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0428 23:59:28.575513   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0428 23:59:28.575577   36356 main.go:141] libmachine: found compatible host: buildroot
	I0428 23:59:28.575591   36356 main.go:141] libmachine: Provisioning with buildroot...
	I0428 23:59:28.575599   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.575836   36356 buildroot.go:166] provisioning hostname "ha-274394-m03"
	I0428 23:59:28.575863   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.576068   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.578532   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.578931   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.578960   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.579060   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.579211   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.579335   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.579444   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.579637   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.579820   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.579837   36356 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394-m03 && echo "ha-274394-m03" | sudo tee /etc/hostname
	I0428 23:59:28.712688   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394-m03
	
	I0428 23:59:28.712717   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.715733   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.716152   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.716191   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.716417   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:28.716624   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.716814   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:28.716966   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:28.717155   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:28.717357   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:28.717380   36356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0428 23:59:28.841508   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0428 23:59:28.841559   36356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0428 23:59:28.841576   36356 buildroot.go:174] setting up certificates
	I0428 23:59:28.841586   36356 provision.go:84] configureAuth start
	I0428 23:59:28.841595   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetMachineName
	I0428 23:59:28.841879   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:28.845193   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.845548   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.845578   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.845693   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:28.847976   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.848368   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:28.848393   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:28.848514   36356 provision.go:143] copyHostCerts
	I0428 23:59:28.848537   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:59:28.848565   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0428 23:59:28.848573   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0428 23:59:28.848635   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0428 23:59:28.848714   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:59:28.848732   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0428 23:59:28.848739   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0428 23:59:28.848762   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0428 23:59:28.848811   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:59:28.848827   36356 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0428 23:59:28.848833   36356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0428 23:59:28.848853   36356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0428 23:59:28.848903   36356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394-m03 san=[127.0.0.1 192.168.39.250 ha-274394-m03 localhost minikube]
	I0428 23:59:29.012952   36356 provision.go:177] copyRemoteCerts
	I0428 23:59:29.013023   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0428 23:59:29.013055   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.015566   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.015904   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.015935   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.016127   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.016358   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.016550   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.016710   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.109376   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0428 23:59:29.109447   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0428 23:59:29.140078   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0428 23:59:29.140132   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0428 23:59:29.170421   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0428 23:59:29.170498   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0428 23:59:29.196503   36356 provision.go:87] duration metric: took 354.905712ms to configureAuth
	I0428 23:59:29.196530   36356 buildroot.go:189] setting minikube options for container-runtime
	I0428 23:59:29.196783   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:29.196853   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.199543   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.199885   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.199907   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.200083   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.200254   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.200404   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.200525   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.200690   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:29.200838   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:29.200853   36356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0428 23:59:29.503246   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0428 23:59:29.503276   36356 main.go:141] libmachine: Checking connection to Docker...
	I0428 23:59:29.503287   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetURL
	I0428 23:59:29.504495   36356 main.go:141] libmachine: (ha-274394-m03) DBG | Using libvirt version 6000000
	I0428 23:59:29.506850   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.507214   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.507241   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.507419   36356 main.go:141] libmachine: Docker is up and running!
	I0428 23:59:29.507439   36356 main.go:141] libmachine: Reticulating splines...
	I0428 23:59:29.507446   36356 client.go:171] duration metric: took 29.864346558s to LocalClient.Create
	I0428 23:59:29.507469   36356 start.go:167] duration metric: took 29.864403952s to libmachine.API.Create "ha-274394"
	I0428 23:59:29.507478   36356 start.go:293] postStartSetup for "ha-274394-m03" (driver="kvm2")
	I0428 23:59:29.507488   36356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0428 23:59:29.507509   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.507729   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0428 23:59:29.507746   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.510131   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.510522   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.510563   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.510678   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.510845   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.511001   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.511156   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.596901   36356 ssh_runner.go:195] Run: cat /etc/os-release
	I0428 23:59:29.601706   36356 info.go:137] Remote host: Buildroot 2023.02.9
	I0428 23:59:29.601727   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0428 23:59:29.601789   36356 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0428 23:59:29.601886   36356 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0428 23:59:29.601896   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0428 23:59:29.602001   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0428 23:59:29.612235   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:59:29.639858   36356 start.go:296] duration metric: took 132.371288ms for postStartSetup
	I0428 23:59:29.639898   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetConfigRaw
	I0428 23:59:29.640442   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:29.643445   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.643832   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.643857   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.644181   36356 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0428 23:59:29.644406   36356 start.go:128] duration metric: took 30.021329967s to createHost
	I0428 23:59:29.644432   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.646565   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.646909   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.646939   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.647052   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.647200   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.647366   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.647477   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.647640   36356 main.go:141] libmachine: Using SSH client type: native
	I0428 23:59:29.647806   36356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0428 23:59:29.647818   36356 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0428 23:59:29.767512   36356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714348769.755757207
	
	I0428 23:59:29.767540   36356 fix.go:216] guest clock: 1714348769.755757207
	I0428 23:59:29.767552   36356 fix.go:229] Guest: 2024-04-28 23:59:29.755757207 +0000 UTC Remote: 2024-04-28 23:59:29.644418148 +0000 UTC m=+165.090306589 (delta=111.339059ms)
	I0428 23:59:29.767569   36356 fix.go:200] guest clock delta is within tolerance: 111.339059ms
	I0428 23:59:29.767575   36356 start.go:83] releasing machines lock for "ha-274394-m03", held for 30.144638005s
	I0428 23:59:29.767599   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.767844   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:29.770233   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.770627   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.770658   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.772993   36356 out.go:177] * Found network options:
	I0428 23:59:29.774437   36356 out.go:177]   - NO_PROXY=192.168.39.237,192.168.39.43
	W0428 23:59:29.775869   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 23:59:29.775892   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:59:29.775908   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776440   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776628   36356 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0428 23:59:29.776720   36356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0428 23:59:29.776749   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	W0428 23:59:29.776986   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	W0428 23:59:29.777012   36356 proxy.go:119] fail to check proxy env: Error ip not in block
	I0428 23:59:29.777072   36356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0428 23:59:29.777091   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0428 23:59:29.779588   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.779789   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780023   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.780062   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780288   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:29.780325   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.780341   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:29.780487   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0428 23:59:29.780497   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.780688   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0428 23:59:29.780693   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.780882   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0428 23:59:29.780886   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:29.781047   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0428 23:59:30.022766   36356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0428 23:59:30.029806   36356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0428 23:59:30.029872   36356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0428 23:59:30.049513   36356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0428 23:59:30.049537   36356 start.go:494] detecting cgroup driver to use...
	I0428 23:59:30.049602   36356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0428 23:59:30.067833   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0428 23:59:30.084419   36356 docker.go:217] disabling cri-docker service (if available) ...
	I0428 23:59:30.084490   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0428 23:59:30.101260   36356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0428 23:59:30.118454   36356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0428 23:59:30.245117   36356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0428 23:59:30.402173   36356 docker.go:233] disabling docker service ...
	I0428 23:59:30.402240   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0428 23:59:30.419742   36356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0428 23:59:30.434799   36356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0428 23:59:30.586310   36356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0428 23:59:30.701297   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0428 23:59:30.717873   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0428 23:59:30.740576   36356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0428 23:59:30.740637   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.755747   36356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0428 23:59:30.755821   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.769519   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.783158   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.800160   36356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0428 23:59:30.812526   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.824663   36356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.845871   36356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0428 23:59:30.858527   36356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0428 23:59:30.871070   36356 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0428 23:59:30.871116   36356 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0428 23:59:30.892560   36356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0428 23:59:30.906892   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:31.047857   36356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0428 23:59:31.608180   36356 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0428 23:59:31.608258   36356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0428 23:59:31.613650   36356 start.go:562] Will wait 60s for crictl version
	I0428 23:59:31.613712   36356 ssh_runner.go:195] Run: which crictl
	I0428 23:59:31.618572   36356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0428 23:59:31.667744   36356 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0428 23:59:31.667841   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:59:31.698887   36356 ssh_runner.go:195] Run: crio --version
	I0428 23:59:31.732978   36356 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0428 23:59:31.734467   36356 out.go:177]   - env NO_PROXY=192.168.39.237
	I0428 23:59:31.735737   36356 out.go:177]   - env NO_PROXY=192.168.39.237,192.168.39.43
	I0428 23:59:31.736997   36356 main.go:141] libmachine: (ha-274394-m03) Calling .GetIP
	I0428 23:59:31.739814   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:31.740186   36356 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0428 23:59:31.740216   36356 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0428 23:59:31.740374   36356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0428 23:59:31.745539   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:59:31.759169   36356 mustload.go:65] Loading cluster: ha-274394
	I0428 23:59:31.759367   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:31.759592   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:31.759625   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:31.774099   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0428 23:59:31.774493   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:31.774982   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:31.775008   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:31.775303   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:31.775488   36356 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0428 23:59:31.777010   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:59:31.777277   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:31.777308   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:31.791488   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0428 23:59:31.791874   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:31.792798   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:31.792816   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:31.793108   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:31.793289   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:59:31.793448   36356 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.250
	I0428 23:59:31.793462   36356 certs.go:194] generating shared ca certs ...
	I0428 23:59:31.793482   36356 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.793619   36356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0428 23:59:31.793657   36356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0428 23:59:31.793665   36356 certs.go:256] generating profile certs ...
	I0428 23:59:31.793730   36356 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0428 23:59:31.793754   36356 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005
	I0428 23:59:31.793767   36356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.250 192.168.39.254]
	I0428 23:59:31.935877   36356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 ...
	I0428 23:59:31.935910   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005: {Name:mkb1d55f40172ee8436492fe8f68a99e68fc03c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.936096   36356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005 ...
	I0428 23:59:31.936114   36356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005: {Name:mka939da220f505a93b36da1922b3c1aa6b40303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:59:31.936219   36356 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.293e4005 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0428 23:59:31.936357   36356 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.293e4005 -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0428 23:59:31.936484   36356 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0428 23:59:31.936501   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0428 23:59:31.936513   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0428 23:59:31.936526   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0428 23:59:31.936537   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0428 23:59:31.936547   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0428 23:59:31.936557   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0428 23:59:31.936567   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0428 23:59:31.936577   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0428 23:59:31.936618   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0428 23:59:31.936644   36356 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0428 23:59:31.936653   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0428 23:59:31.936674   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0428 23:59:31.936698   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0428 23:59:31.936718   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0428 23:59:31.936753   36356 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0428 23:59:31.936778   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0428 23:59:31.936791   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:31.936803   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0428 23:59:31.936841   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:59:31.939693   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:31.940099   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:59:31.940126   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:31.940327   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:59:31.940489   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:59:31.940610   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:59:31.940730   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:59:32.014500   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0428 23:59:32.019823   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0428 23:59:32.035881   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0428 23:59:32.040779   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0428 23:59:32.057077   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0428 23:59:32.062345   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0428 23:59:32.076399   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0428 23:59:32.083489   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0428 23:59:32.100707   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0428 23:59:32.105612   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0428 23:59:32.119597   36356 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0428 23:59:32.124604   36356 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0428 23:59:32.138154   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0428 23:59:32.170086   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0428 23:59:32.198963   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0428 23:59:32.226907   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0428 23:59:32.253636   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0428 23:59:32.278554   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0428 23:59:32.304986   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0428 23:59:32.331547   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0428 23:59:32.359245   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0428 23:59:32.388215   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0428 23:59:32.415247   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0428 23:59:32.442937   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0428 23:59:32.462088   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0428 23:59:32.485219   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0428 23:59:32.505466   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0428 23:59:32.524971   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0428 23:59:32.543452   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0428 23:59:32.561866   36356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0428 23:59:32.580866   36356 ssh_runner.go:195] Run: openssl version
	I0428 23:59:32.587281   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0428 23:59:32.599497   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.604579   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.604620   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0428 23:59:32.610534   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0428 23:59:32.622113   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0428 23:59:32.633898   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.639105   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.639152   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0428 23:59:32.645086   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0428 23:59:32.656327   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0428 23:59:32.667538   36356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.672551   36356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.672585   36356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0428 23:59:32.679007   36356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0428 23:59:32.691035   36356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0428 23:59:32.695662   36356 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0428 23:59:32.695716   36356 kubeadm.go:928] updating node {m03 192.168.39.250 8443 v1.30.0 crio true true} ...
	I0428 23:59:32.695808   36356 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0428 23:59:32.695835   36356 kube-vip.go:111] generating kube-vip config ...
	I0428 23:59:32.695872   36356 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0428 23:59:32.712399   36356 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0428 23:59:32.712452   36356 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0428 23:59:32.712493   36356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0428 23:59:32.722354   36356 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0428 23:59:32.722390   36356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0428 23:59:32.733638   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0428 23:59:32.733661   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:59:32.733670   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0428 23:59:32.733674   36356 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0428 23:59:32.733720   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0428 23:59:32.733727   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0428 23:59:32.733688   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:59:32.733893   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0428 23:59:32.743978   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0428 23:59:32.744010   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0428 23:59:32.754412   36356 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:59:32.754479   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0428 23:59:32.754511   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0428 23:59:32.754529   36356 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0428 23:59:32.811837   36356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0428 23:59:32.811888   36356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0428 23:59:33.742700   36356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0428 23:59:33.754911   36356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0428 23:59:33.775851   36356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0428 23:59:33.794540   36356 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0428 23:59:33.812285   36356 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0428 23:59:33.816564   36356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0428 23:59:33.831086   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:33.959822   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:59:33.979990   36356 host.go:66] Checking if "ha-274394" exists ...
	I0428 23:59:33.980339   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:59:33.980381   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:59:33.995673   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0428 23:59:33.996168   36356 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:59:33.996792   36356 main.go:141] libmachine: Using API Version  1
	I0428 23:59:33.996826   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:59:33.997145   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:59:33.997356   36356 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0428 23:59:33.997488   36356 start.go:316] joinCluster: &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:59:33.997595   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0428 23:59:33.997609   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0428 23:59:34.000823   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:34.001251   36356 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0428 23:59:34.001282   36356 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0428 23:59:34.001440   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0428 23:59:34.001603   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0428 23:59:34.001747   36356 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0428 23:59:34.001869   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0428 23:59:34.163965   36356 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:59:34.164013   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xjl1hv.qr8jswflfz5d4crm --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0428 23:59:57.753188   36356 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xjl1hv.qr8jswflfz5d4crm --discovery-token-ca-cert-hash sha256:c90111446aabe3a401a65d4b7a9a9a2168cf750db867b339079fc02c5d132a33 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-274394-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (23.589146569s)
	I0428 23:59:57.753234   36356 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0428 23:59:58.433673   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-274394-m03 minikube.k8s.io/updated_at=2024_04_28T23_59_58_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328 minikube.k8s.io/name=ha-274394 minikube.k8s.io/primary=false
	I0428 23:59:58.575531   36356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-274394-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0428 23:59:58.699252   36356 start.go:318] duration metric: took 24.701760658s to joinCluster
	I0428 23:59:58.699325   36356 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0428 23:59:58.700772   36356 out.go:177] * Verifying Kubernetes components...
	I0428 23:59:58.699710   36356 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:59:58.702011   36356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0428 23:59:58.992947   36356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0428 23:59:59.039022   36356 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:59:59.039347   36356 kapi.go:59] client config for ha-274394: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.crt", KeyFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key", CAFile:"/home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0428 23:59:59.039423   36356 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.237:8443
	I0428 23:59:59.039662   36356 node_ready.go:35] waiting up to 6m0s for node "ha-274394-m03" to be "Ready" ...
	I0428 23:59:59.039739   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0428 23:59:59.039749   36356 round_trippers.go:469] Request Headers:
	I0428 23:59:59.039761   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:59:59.039769   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:59:59.044215   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0428 23:59:59.540563   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0428 23:59:59.540595   36356 round_trippers.go:469] Request Headers:
	I0428 23:59:59.540605   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0428 23:59:59.540611   36356 round_trippers.go:473]     Accept: application/json, */*
	I0428 23:59:59.545042   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:00.040363   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:00.040387   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:00.040395   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:00.040399   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:00.052445   36356 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 00:00:00.540532   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:00.540560   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:00.540570   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:00.540575   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:00.544839   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:01.040082   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:01.040105   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:01.040113   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:01.040116   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:01.043721   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:01.044695   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:01.540152   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:01.540175   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:01.540182   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:01.540185   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:01.544113   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:02.040892   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:02.040915   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:02.040926   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:02.040933   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:02.045818   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:02.539953   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:02.539976   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:02.539983   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:02.539988   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:02.544199   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:03.040237   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:03.040265   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:03.040276   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:03.040282   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:03.045325   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:03.046199   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:03.540612   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:03.540637   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:03.540650   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:03.540654   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:03.545488   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:04.040836   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:04.040868   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:04.040887   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:04.040895   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:04.046280   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:04.540501   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:04.540527   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:04.540544   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:04.540551   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:04.544986   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:05.040393   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:05.040420   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:05.040429   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:05.040437   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:05.045045   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:05.046512   36356 node_ready.go:53] node "ha-274394-m03" has status "Ready":"False"
	I0429 00:00:05.540289   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:05.540310   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:05.540316   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:05.540320   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:05.545290   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:06.040177   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:06.040197   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:06.040203   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:06.040207   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:06.051024   36356 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 00:00:06.540102   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:06.540136   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:06.540144   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:06.540148   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:06.544263   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.039962   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.039982   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.039990   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.039995   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.044076   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.540106   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.540132   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.540145   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.540151   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.543779   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.544580   36356 node_ready.go:49] node "ha-274394-m03" has status "Ready":"True"
	I0429 00:00:07.544602   36356 node_ready.go:38] duration metric: took 8.504923556s for node "ha-274394-m03" to be "Ready" ...
	I0429 00:00:07.544611   36356 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 00:00:07.544667   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:07.544677   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.544684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.544687   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.553653   36356 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 00:00:07.561698   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.561817   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rslhx
	I0429 00:00:07.561827   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.561838   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.561847   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.565506   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.566518   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.566534   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.566540   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.566545   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.569905   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.570543   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.570562   36356 pod_ready.go:81] duration metric: took 8.831944ms for pod "coredns-7db6d8ff4d-rslhx" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.570571   36356 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.570622   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xkdcv
	I0429 00:00:07.570630   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.570636   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.570640   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.574999   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.576125   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.576146   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.576157   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.576161   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.580599   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.581239   36356 pod_ready.go:92] pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.581256   36356 pod_ready.go:81] duration metric: took 10.67917ms for pod "coredns-7db6d8ff4d-xkdcv" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.581274   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.581333   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394
	I0429 00:00:07.581342   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.581394   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.581408   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.589396   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:07.590368   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:07.590389   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.590396   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.590401   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.593754   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.594461   36356 pod_ready.go:92] pod "etcd-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.594479   36356 pod_ready.go:81] duration metric: took 13.196822ms for pod "etcd-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.594491   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.594550   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m02
	I0429 00:00:07.594561   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.594571   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.594579   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.598205   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.598968   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:07.598989   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.598997   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.599003   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.602493   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:07.603262   36356 pod_ready.go:92] pod "etcd-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:07.603287   36356 pod_ready.go:81] duration metric: took 8.787518ms for pod "etcd-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.603300   36356 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:07.740735   36356 request.go:629] Waited for 137.335456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:07.740793   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:07.740799   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.740806   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.740810   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.744904   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:07.940222   36356 request.go:629] Waited for 194.103628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.940293   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:07.940300   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:07.940311   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:07.940320   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:07.944388   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:08.140733   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:08.140759   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.140768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.140772   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.146200   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:08.340435   36356 request.go:629] Waited for 193.229978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.340527   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.340535   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.340548   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.340554   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.348074   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:08.603651   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:08.603675   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.603684   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.603690   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.607841   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:08.740178   36356 request.go:629] Waited for 131.240032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.740243   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:08.740251   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:08.740262   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:08.740269   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:08.744106   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.104560   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:09.104580   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.104587   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.104593   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.109646   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:09.141091   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:09.141121   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.141133   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.141140   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.145148   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.604524   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:09.606661   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.606683   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.606690   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.612435   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:09.614907   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:09.614925   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:09.614932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:09.614936   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:09.618703   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:09.619548   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:10.103536   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:10.103614   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.103630   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.103638   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.109798   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:10.111485   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:10.111504   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.111515   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.111522   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.115064   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:10.604281   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:10.604310   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.604320   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.604325   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.609142   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:10.610464   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:10.610484   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:10.610495   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:10.610500   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:10.616566   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:11.103607   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:11.103632   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.103642   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.103646   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.107835   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:11.108744   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:11.108761   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.108768   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.108772   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.111862   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:11.603444   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:11.603463   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.603470   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.603475   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.607641   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:11.608655   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:11.608670   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:11.608682   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:11.608686   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:11.612140   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:12.104123   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:12.104145   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.104152   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.104156   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.108430   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:12.109205   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:12.109221   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.109228   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.109234   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.113119   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:12.113849   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:12.604327   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:12.604352   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.604363   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.604367   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.609419   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:12.610396   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:12.610415   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:12.610424   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:12.610429   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:12.613931   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:13.103988   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:13.104013   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.104020   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.104024   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.108252   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.109306   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:13.109326   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.109336   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.109342   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.113607   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.603763   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:13.603785   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.603795   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.603800   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.608023   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:13.608911   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:13.608963   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:13.608978   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:13.608983   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:13.612695   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.104203   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:14.104224   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.104233   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.104238   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.110775   36356 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 00:00:14.111901   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.111922   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.111932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.111937   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.124604   36356 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 00:00:14.125329   36356 pod_ready.go:102] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 00:00:14.603835   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-ha-274394-m03
	I0429 00:00:14.605874   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.605892   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.605898   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.610827   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:14.611910   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.611925   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.611932   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.611936   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.616291   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:14.616947   36356 pod_ready.go:92] pod "etcd-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.616966   36356 pod_ready.go:81] duration metric: took 7.0136586s for pod "etcd-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.616984   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.617056   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394
	I0429 00:00:14.617065   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.617072   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.617077   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.620266   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.620999   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:14.621015   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.621022   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.621028   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.624239   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.624743   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.624766   36356 pod_ready.go:81] duration metric: took 7.774137ms for pod "kube-apiserver-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.624778   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.624845   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m02
	I0429 00:00:14.624856   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.624864   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.624868   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.628427   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.629157   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:14.629170   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.629177   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.629180   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.632260   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.632852   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.632874   36356 pod_ready.go:81] duration metric: took 8.087549ms for pod "kube-apiserver-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.632887   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.632958   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-274394-m03
	I0429 00:00:14.632969   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.632979   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.632988   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.636284   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:14.740734   36356 request.go:629] Waited for 103.678901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.740817   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:14.740831   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.740841   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.740846   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.773084   36356 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0429 00:00:14.773907   36356 pod_ready.go:92] pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:14.773924   36356 pod_ready.go:81] duration metric: took 141.027444ms for pod "kube-apiserver-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
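
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are produced by client-go's own token-bucket rate limiter (the QPS/Burst settings on rest.Config), not by server-side API Priority and Fairness; once the default budget is spent, each request is delayed until a token frees up, which is the ~100-200ms gap logged here. A minimal sketch of where that limiter lives, assuming a stock client-go setup (the kubeconfig path and the QPS/Burst numbers are illustrative, not values taken from this run):

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/flowcontrol"
)

func main() {
    // Assumed kubeconfig location; minikube writes its own under MINIKUBE_HOME.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }

    // client-go throttles requests on the client with a token bucket.
    // Raising QPS/Burst (arbitrary values here) shortens the
    // "Waited for ... due to client-side throttling" delays.
    cfg.QPS = 50
    cfg.Burst = 100
    cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)

    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Printf("client configured: %T\n", clientset)
}
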
	I0429 00:00:14.773933   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:14.940281   36356 request.go:629] Waited for 166.264237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0429 00:00:14.940343   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394
	I0429 00:00:14.940349   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:14.940360   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:14.940365   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:14.944365   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:15.140345   36356 request.go:629] Waited for 195.163934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:15.140423   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:15.140431   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.140439   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.140444   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.144062   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:15.144848   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.144866   36356 pod_ready.go:81] duration metric: took 370.926651ms for pod "kube-controller-manager-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.144875   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.340256   36356 request.go:629] Waited for 195.311171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0429 00:00:15.340322   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m02
	I0429 00:00:15.340327   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.340355   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.340361   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.347923   36356 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 00:00:15.540947   36356 request.go:629] Waited for 191.397388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:15.541007   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:15.541019   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.541026   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.541034   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.545131   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:15.545894   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.545917   36356 pod_ready.go:81] duration metric: took 401.034522ms for pod "kube-controller-manager-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.545930   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.740918   36356 request.go:629] Waited for 194.911345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m03
	I0429 00:00:15.741006   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-274394-m03
	I0429 00:00:15.741012   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.741021   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.741028   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.746377   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:15.940547   36356 request.go:629] Waited for 193.382908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:15.940633   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:15.940646   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:15.940656   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:15.940661   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:15.946477   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:15.947225   36356 pod_ready.go:92] pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:15.947250   36356 pod_ready.go:81] duration metric: took 401.3069ms for pod "kube-controller-manager-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:15.947264   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4rb7k" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.140188   36356 request.go:629] Waited for 192.839853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rb7k
	I0429 00:00:16.140294   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rb7k
	I0429 00:00:16.140312   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.140324   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.140332   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.145774   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:16.340228   36356 request.go:629] Waited for 193.697798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:16.340310   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:16.340317   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.340329   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.340339   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.344612   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:16.345393   36356 pod_ready.go:92] pod "kube-proxy-4rb7k" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:16.345411   36356 pod_ready.go:81] duration metric: took 398.139664ms for pod "kube-proxy-4rb7k" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.345423   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.540629   36356 request.go:629] Waited for 195.13398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0429 00:00:16.540716   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95c9
	I0429 00:00:16.540728   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.540738   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.540747   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.545764   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:16.740862   36356 request.go:629] Waited for 194.341193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:16.740912   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:16.740917   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.740924   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.740928   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.744945   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:16.745580   36356 pod_ready.go:92] pod "kube-proxy-g95c9" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:16.745613   36356 pod_ready.go:81] duration metric: took 400.179822ms for pod "kube-proxy-g95c9" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.745629   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:16.940209   36356 request.go:629] Waited for 194.512821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0429 00:00:16.940295   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pwbfs
	I0429 00:00:16.940311   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:16.940321   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:16.940332   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:16.944152   36356 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 00:00:17.140530   36356 request.go:629] Waited for 195.395948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.140617   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.140631   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.140641   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.140648   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.146669   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:17.147419   36356 pod_ready.go:92] pod "kube-proxy-pwbfs" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.147443   36356 pod_ready.go:81] duration metric: took 401.8052ms for pod "kube-proxy-pwbfs" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.147454   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.340500   36356 request.go:629] Waited for 192.965416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0429 00:00:17.340634   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394
	I0429 00:00:17.340647   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.340655   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.340670   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.344716   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.540937   36356 request.go:629] Waited for 195.33873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.541019   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394
	I0429 00:00:17.541031   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.541042   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.541052   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.545973   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.546666   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.546694   36356 pod_ready.go:81] duration metric: took 399.233302ms for pod "kube-scheduler-ha-274394" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.546708   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.740796   36356 request.go:629] Waited for 193.996654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0429 00:00:17.740856   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m02
	I0429 00:00:17.740862   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.740869   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.740872   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.745935   36356 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 00:00:17.940323   36356 request.go:629] Waited for 193.289642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:17.940410   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m02
	I0429 00:00:17.940419   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:17.940429   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:17.940440   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:17.945223   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:17.946163   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:17.946184   36356 pod_ready.go:81] duration metric: took 399.468104ms for pod "kube-scheduler-ha-274394-m02" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:17.946193   36356 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:18.140175   36356 request.go:629] Waited for 193.91684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m03
	I0429 00:00:18.140243   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-274394-m03
	I0429 00:00:18.140248   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.140276   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.140289   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.145196   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.340751   36356 request.go:629] Waited for 194.401601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:18.340842   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/ha-274394-m03
	I0429 00:00:18.340851   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.340862   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.340873   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.344942   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.345590   36356 pod_ready.go:92] pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 00:00:18.345611   36356 pod_ready.go:81] duration metric: took 399.411796ms for pod "kube-scheduler-ha-274394-m03" in "kube-system" namespace to be "Ready" ...
	I0429 00:00:18.345623   36356 pod_ready.go:38] duration metric: took 10.801003405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
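
The waits summarized above poll each control-plane pod roughly every 500ms (see the timestamps) until its Ready condition is True, with a 6m0s per-pod budget. A rough illustration of that pattern with client-go; the pod name, namespace and timeout are copied from the log for flavor, but this is an editorial sketch, not minikube's pod_ready implementation:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    defer cancel()
    for {
        pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-274394-m03", metav1.GetOptions{})
        if err == nil && podReady(pod) {
            fmt.Println("pod is Ready")
            return
        }
        select {
        case <-ctx.Done():
            fmt.Println("timed out waiting for Ready")
            return
        case <-time.After(500 * time.Millisecond):
        }
    }
}
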
	I0429 00:00:18.345641   36356 api_server.go:52] waiting for apiserver process to appear ...
	I0429 00:00:18.345710   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:00:18.374589   36356 api_server.go:72] duration metric: took 19.675227263s to wait for apiserver process to appear ...
	I0429 00:00:18.374620   36356 api_server.go:88] waiting for apiserver healthz status ...
	I0429 00:00:18.374648   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0429 00:00:18.379661   36356 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0429 00:00:18.379729   36356 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0429 00:00:18.379740   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.379754   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.379761   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.380937   36356 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 00:00:18.381043   36356 api_server.go:141] control plane version: v1.30.0
	I0429 00:00:18.381061   36356 api_server.go:131] duration metric: took 6.434791ms to wait for apiserver health ...
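
The healthz/version probe above is just an HTTPS GET against the apiserver, where a healthy server answers 200 with the literal body "ok" (hence "returned 200: ok"). A bare-bones sketch of the same probe; the endpoint is the one in the log, but skipping TLS verification and relying on the default RBAC that exposes /healthz and /version to unauthenticated clients are shortcuts taken only for this illustration:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Illustration only: skip certificate verification instead of trusting the cluster CA.
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    }}

    for _, path := range []string{"/healthz", "/version"} {
        resp, err := client.Get("https://192.168.39.237:8443" + path)
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("%s -> %d %s\n", path, resp.StatusCode, body)
    }
}
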
	I0429 00:00:18.381069   36356 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 00:00:18.540471   36356 request.go:629] Waited for 159.310973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.540534   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.540539   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.540546   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.540552   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.553803   36356 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 00:00:18.561130   36356 system_pods.go:59] 24 kube-system pods found
	I0429 00:00:18.561159   36356 system_pods.go:61] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0429 00:00:18.561164   36356 system_pods.go:61] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0429 00:00:18.561167   36356 system_pods.go:61] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0429 00:00:18.561171   36356 system_pods.go:61] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0429 00:00:18.561174   36356 system_pods.go:61] "etcd-ha-274394-m03" [64d0cf43-d3cd-4054-b44a-e8b4f8a70b06] Running
	I0429 00:00:18.561176   36356 system_pods.go:61] "kindnet-29qlf" [915875ab-c1aa-46d6-b5e1-b6a7eff8dd64] Running
	I0429 00:00:18.561179   36356 system_pods.go:61] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0429 00:00:18.561182   36356 system_pods.go:61] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0429 00:00:18.561185   36356 system_pods.go:61] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0429 00:00:18.561188   36356 system_pods.go:61] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0429 00:00:18.561191   36356 system_pods.go:61] "kube-apiserver-ha-274394-m03" [a9546d9d-7c2a-45c4-a0a5-a5efea4a04d9] Running
	I0429 00:00:18.561194   36356 system_pods.go:61] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0429 00:00:18.561197   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0429 00:00:18.561200   36356 system_pods.go:61] "kube-controller-manager-ha-274394-m03" [f4094095-5c0c-4fb7-9c76-fb63e6c6eeb2] Running
	I0429 00:00:18.561203   36356 system_pods.go:61] "kube-proxy-4rb7k" [de261499-d4f2-44b0-869b-28ae3505f19f] Running
	I0429 00:00:18.561205   36356 system_pods.go:61] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0429 00:00:18.561209   36356 system_pods.go:61] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0429 00:00:18.561212   36356 system_pods.go:61] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0429 00:00:18.561214   36356 system_pods.go:61] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0429 00:00:18.561217   36356 system_pods.go:61] "kube-scheduler-ha-274394-m03" [7084f6de-4070-4d9b-b313-4b52f51123c7] Running
	I0429 00:00:18.561220   36356 system_pods.go:61] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0429 00:00:18.561222   36356 system_pods.go:61] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0429 00:00:18.561225   36356 system_pods.go:61] "kube-vip-ha-274394-m03" [bd6c2740-2068-4849-a23b-56d9ce0ac21c] Running
	I0429 00:00:18.561227   36356 system_pods.go:61] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0429 00:00:18.561232   36356 system_pods.go:74] duration metric: took 180.158592ms to wait for pod list to return data ...
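
The "24 kube-system pods found" block above is a single list of the kube-system namespace with each pod's phase printed; all of them reporting Running is what lets the check pass. A compact sketch of the same list-and-print with client-go (again an editorial illustration, not the test's system_pods code):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
        // Mirrors the per-pod lines above: name, UID, and phase.
        fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }
}
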
	I0429 00:00:18.561240   36356 default_sa.go:34] waiting for default service account to be created ...
	I0429 00:00:18.740670   36356 request.go:629] Waited for 179.356953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0429 00:00:18.740727   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0429 00:00:18.740739   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.740746   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.740750   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.745356   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:18.745474   36356 default_sa.go:45] found service account: "default"
	I0429 00:00:18.745492   36356 default_sa.go:55] duration metric: took 184.245419ms for default service account to be created ...
	I0429 00:00:18.745502   36356 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 00:00:18.940811   36356 request.go:629] Waited for 195.241974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.940863   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0429 00:00:18.940868   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:18.940874   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:18.940886   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:18.950591   36356 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 00:00:18.957766   36356 system_pods.go:86] 24 kube-system pods found
	I0429 00:00:18.957805   36356 system_pods.go:89] "coredns-7db6d8ff4d-rslhx" [b73501ce-7591-45a5-b59e-331f7752c71b] Running
	I0429 00:00:18.957815   36356 system_pods.go:89] "coredns-7db6d8ff4d-xkdcv" [60272694-edd8-4a8c-abd9-707cdb1355ea] Running
	I0429 00:00:18.957821   36356 system_pods.go:89] "etcd-ha-274394" [e951aad6-16ba-42de-bcb6-a90ec5388fc8] Running
	I0429 00:00:18.957830   36356 system_pods.go:89] "etcd-ha-274394-m02" [63565823-56bf-4bd7-b8da-604a1b0d4d30] Running
	I0429 00:00:18.957836   36356 system_pods.go:89] "etcd-ha-274394-m03" [64d0cf43-d3cd-4054-b44a-e8b4f8a70b06] Running
	I0429 00:00:18.957844   36356 system_pods.go:89] "kindnet-29qlf" [915875ab-c1aa-46d6-b5e1-b6a7eff8dd64] Running
	I0429 00:00:18.957851   36356 system_pods.go:89] "kindnet-6qf7q" [f00be25f-cefa-41ac-8c38-1d52f337e8b9] Running
	I0429 00:00:18.957859   36356 system_pods.go:89] "kindnet-p6qmw" [528219cb-5850-471c-97de-c31dcca693b1] Running
	I0429 00:00:18.957872   36356 system_pods.go:89] "kube-apiserver-ha-274394" [f20281d2-0f10-43b0-9a51-495d03b5a5c3] Running
	I0429 00:00:18.957880   36356 system_pods.go:89] "kube-apiserver-ha-274394-m02" [0f8b7b21-a990-447f-a3b8-6acdccf078d3] Running
	I0429 00:00:18.957897   36356 system_pods.go:89] "kube-apiserver-ha-274394-m03" [a9546d9d-7c2a-45c4-a0a5-a5efea4a04d9] Running
	I0429 00:00:18.957905   36356 system_pods.go:89] "kube-controller-manager-ha-274394" [8fb69743-3a7b-4fad-838c-a45e1667724c] Running
	I0429 00:00:18.957913   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m02" [429f2ab6-9771-47b2-b827-d183897f6276] Running
	I0429 00:00:18.957924   36356 system_pods.go:89] "kube-controller-manager-ha-274394-m03" [f4094095-5c0c-4fb7-9c76-fb63e6c6eeb2] Running
	I0429 00:00:18.957932   36356 system_pods.go:89] "kube-proxy-4rb7k" [de261499-d4f2-44b0-869b-28ae3505f19f] Running
	I0429 00:00:18.957940   36356 system_pods.go:89] "kube-proxy-g95c9" [5be866d8-0014-44c7-a4cd-e93655e9c599] Running
	I0429 00:00:18.957947   36356 system_pods.go:89] "kube-proxy-pwbfs" [5303f947-6c3f-47b5-b396-33b92049d48f] Running
	I0429 00:00:18.957956   36356 system_pods.go:89] "kube-scheduler-ha-274394" [22d206f5-49cc-43d0-939e-249961518bb4] Running
	I0429 00:00:18.957968   36356 system_pods.go:89] "kube-scheduler-ha-274394-m02" [3371d359-adb1-4111-8ae1-44934bad24c3] Running
	I0429 00:00:18.957976   36356 system_pods.go:89] "kube-scheduler-ha-274394-m03" [7084f6de-4070-4d9b-b313-4b52f51123c7] Running
	I0429 00:00:18.957987   36356 system_pods.go:89] "kube-vip-ha-274394" [ce6151de-754a-4f15-94d4-71f4fb9cbd21] Running
	I0429 00:00:18.957996   36356 system_pods.go:89] "kube-vip-ha-274394-m02" [f276f128-37bf-4f93-a573-e6b491fa66cd] Running
	I0429 00:00:18.958005   36356 system_pods.go:89] "kube-vip-ha-274394-m03" [bd6c2740-2068-4849-a23b-56d9ce0ac21c] Running
	I0429 00:00:18.958014   36356 system_pods.go:89] "storage-provisioner" [b291d6ca-3a9b-4dd0-b0e9-a183347e7d26] Running
	I0429 00:00:18.958039   36356 system_pods.go:126] duration metric: took 212.530081ms to wait for k8s-apps to be running ...
	I0429 00:00:18.958053   36356 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 00:00:18.958113   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:00:18.980447   36356 system_svc.go:56] duration metric: took 22.384449ms WaitForService to wait for kubelet
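
The WaitForService step above shells out (over the test's ssh_runner) to `sudo systemctl is-active --quiet service kubelet` and treats exit status 0 as "running". A local-only sketch of the same check, without sudo or the SSH hop:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // `systemctl is-active --quiet kubelet` exits 0 when the unit is active;
    // exec's Run returns a non-nil error for any non-zero exit status.
    if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
        fmt.Println("kubelet is not active:", err)
        return
    }
    fmt.Println("kubelet is active")
}
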
	I0429 00:00:18.980482   36356 kubeadm.go:576] duration metric: took 20.281123012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:00:18.980513   36356 node_conditions.go:102] verifying NodePressure condition ...
	I0429 00:00:19.140458   36356 request.go:629] Waited for 159.863326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0429 00:00:19.140539   36356 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0429 00:00:19.140546   36356 round_trippers.go:469] Request Headers:
	I0429 00:00:19.140556   36356 round_trippers.go:473]     Accept: application/json, */*
	I0429 00:00:19.140562   36356 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 00:00:19.145258   36356 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 00:00:19.146414   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146436   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146451   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146457   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146462   36356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 00:00:19.146466   36356 node_conditions.go:123] node cpu capacity is 2
	I0429 00:00:19.146472   36356 node_conditions.go:105] duration metric: took 165.952797ms to run NodePressure ...
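
The NodePressure lines above ("ephemeral capacity is 17734596Ki", "cpu capacity is 2", once per node) come from reading each node's status.capacity out of a single node list. A short sketch of pulling those two fields with client-go (kubeconfig resolution as in the sketches above):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    }
}
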
	I0429 00:00:19.146487   36356 start.go:240] waiting for startup goroutines ...
	I0429 00:00:19.146521   36356 start.go:254] writing updated cluster config ...
	I0429 00:00:19.146849   36356 ssh_runner.go:195] Run: rm -f paused
	I0429 00:00:19.201608   36356 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 00:00:19.204410   36356 out.go:177] * Done! kubectl is now configured to use "ha-274394" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.240149097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349087240125200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a0c2752-fd94-40f7-b340-072e0419d88a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.240865597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d220afe7-38fc-4661-944a-243b33b13ab1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.241002170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d220afe7-38fc-4661-944a-243b33b13ab1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.241450935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d220afe7-38fc-4661-944a-243b33b13ab1 name=/runtime.v1.RuntimeService/ListContainers
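
The CRI-O debug entries above record CRI calls (Version, ImageFsInfo, ListContainers) arriving over the runtime's gRPC socket; the very long ListContainersResponse is simply the full container list serialized into the debug log. A rough sketch of issuing the same ListContainers call directly against CRI-O's default socket (`crictl ps` does the equivalent from the command line); the socket path and root access are assumptions of this illustration:

package main

import (
    "context"
    "fmt"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // CRI-O's default CRI socket; opening it normally requires root.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    rt := runtimeapi.NewRuntimeServiceClient(conn)
    // An empty filter returns every container, matching the
    // "No filters were applied, returning full container list" lines above.
    resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
    if err != nil {
        panic(err)
    }
    for _, c := range resp.Containers {
        fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
    }
}
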
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.306101163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b2d816c-26a5-461f-8a4e-065023519171 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.306208691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b2d816c-26a5-461f-8a4e-065023519171 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.307806503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1801b49e-56e3-40d8-bb65-e7e38ff1c641 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.308602216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349087308566013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1801b49e-56e3-40d8-bb65-e7e38ff1c641 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.309366215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4906760-7f2c-4e23-895c-2a28526c1f50 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.309426836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4906760-7f2c-4e23-895c-2a28526c1f50 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.309702124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4906760-7f2c-4e23-895c-2a28526c1f50 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.362893113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0810c950-4c86-4795-829a-fefc9021deaa name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.363201282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0810c950-4c86-4795-829a-fefc9021deaa name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.364626673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6629b142-4eb4-4396-9e58-68a5a7694f9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.365235845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349087365209824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6629b142-4eb4-4396-9e58-68a5a7694f9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.365692402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a275003-34d2-4f25-b2de-b8968ee5d196 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.365773259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a275003-34d2-4f25-b2de-b8968ee5d196 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.366085478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a275003-34d2-4f25-b2de-b8968ee5d196 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.410726306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7dd1ec5-958d-4cf9-9b59-92f87b8e993d name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.410810361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7dd1ec5-958d-4cf9-9b59-92f87b8e993d name=/runtime.v1.RuntimeService/Version
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.412684665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bccffa4-3376-46d1-aa3f-7ebba90d14ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.413232050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349087413195003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bccffa4-3376-46d1-aa3f-7ebba90d14ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.414479563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92dfc383-b422-4dfd-b5b1-bf1811acccb6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.414539990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92dfc383-b422-4dfd-b5b1-bf1811acccb6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:04:47 ha-274394 crio[683]: time="2024-04-29 00:04:47.415241284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714348823628567057,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661893773308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714348661892775278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c766c3729b062ad9523a21758b7f93223bf47884319719f155df69e0c878c0d,PodSandboxId:f1817cc9d2fb29d92226070e777d7f2664e9716deffbfd22958ef7ad13f68141,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714348661665524227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627,PodSandboxId:24061593c71f1368aae369b932213e75732db79a91d1d67f1141cc04179081c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17143486
59851649944,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714348659690846296,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036,PodSandboxId:f4f6e257f8d6f474550047de14882591cd7346735aaf472bb6094237b186f38f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714348643069752665,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb76ef860db5fc6bc2bb141383bf5a5,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714348640048328200,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714348639992200258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9,PodSandboxId:d6f56935776d1dcd78c5fabfd595024640090664bcf02dab3ffe43581c3d1931,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714348639895135304,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f,PodSandboxId:974770f9d2d8d35da0a33f54f885619933ec20d5542b45b5d69d7ad325a6cf21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714348639938867551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92dfc383-b422-4dfd-b5b1-bf1811acccb6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6191db59237ab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   7dc34422a092b       busybox-fc5497c4f-wwl6p
	39cef99138b5e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   86b45c3768b5c       coredns-7db6d8ff4d-rslhx
	4b75dd2cf8167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   0a16b0222b334       coredns-7db6d8ff4d-xkdcv
	2c766c3729b06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   f1817cc9d2fb2       storage-provisioner
	229d446ccd2c1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   24061593c71f1       kindnet-p6qmw
	10c90fba42aa7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   fe59c57afd7dc       kube-proxy-pwbfs
	1144436f5b67a       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   f4f6e257f8d6f       kube-vip-ha-274394
	a2665b4434106       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   9792afe7047da       etcd-ha-274394
	cd7d63b0cf58d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   fb9c09a8e5609       kube-scheduler-ha-274394
	d4d50ed07ba22       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   974770f9d2d8d       kube-apiserver-ha-274394
	ec35813faf9fb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   d6f56935776d1       kube-controller-manager-ha-274394
	
	
	==> coredns [39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e] <==
	[INFO] 10.244.2.2:36735 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169991s
	[INFO] 10.244.1.2:33891 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130376s
	[INFO] 10.244.1.2:52014 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135334s
	[INFO] 10.244.1.2:38829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00166462s
	[INFO] 10.244.1.2:60722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098874s
	[INFO] 10.244.0.4:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092957s
	[INFO] 10.244.0.4:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001823584s
	[INFO] 10.244.0.4:33350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106647s
	[INFO] 10.244.0.4:39835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220436s
	[INFO] 10.244.0.4:34474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060725s
	[INFO] 10.244.0.4:42677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076278s
	[INFO] 10.244.2.2:41566 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146322s
	[INFO] 10.244.2.2:39633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160447s
	[INFO] 10.244.2.2:36533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123881s
	[INFO] 10.244.1.2:54710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162932s
	[INFO] 10.244.1.2:59010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096219s
	[INFO] 10.244.1.2:39468 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158565s
	[INFO] 10.244.0.4:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179168s
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091044s
	[INFO] 10.244.2.2:46078 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195018s
	[INFO] 10.244.2.2:47504 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268349s
	[INFO] 10.244.1.2:34168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161101s
	[INFO] 10.244.0.4:52891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148878s
	[INFO] 10.244.0.4:43079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155917s
	[INFO] 10.244.0.4:46898 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114218s
	
	
	==> coredns [4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a] <==
	[INFO] 10.244.2.2:46937 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.035529764s
	[INFO] 10.244.2.2:48074 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014240201s
	[INFO] 10.244.1.2:36196 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000184659s
	[INFO] 10.244.1.2:52009 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000114923s
	[INFO] 10.244.0.4:54740 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078827s
	[INFO] 10.244.0.4:52614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00194917s
	[INFO] 10.244.2.2:33162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:57592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.023066556s
	[INFO] 10.244.2.2:57043 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235049s
	[INFO] 10.244.1.2:47075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014599s
	[INFO] 10.244.1.2:60870 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002072779s
	[INFO] 10.244.1.2:46861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094825s
	[INFO] 10.244.1.2:46908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186676s
	[INFO] 10.244.0.4:60188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001709235s
	[INFO] 10.244.0.4:43834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109382s
	[INFO] 10.244.2.2:42186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000296079s
	[INFO] 10.244.1.2:44715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184251s
	[INFO] 10.244.0.4:45543 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116414s
	[INFO] 10.244.0.4:47556 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083226s
	[INFO] 10.244.2.2:59579 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198403s
	[INFO] 10.244.2.2:42196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278968s
	[INFO] 10.244.1.2:34121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222019s
	[INFO] 10.244.1.2:54334 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016838s
	[INFO] 10.244.1.2:37434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099473s
	[INFO] 10.244.0.4:58711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000413259s
	
	
	==> describe nodes <==
	Name:               ha-274394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:57:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:04:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:00:31 +0000   Sun, 28 Apr 2024 23:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-274394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbc86a402e5548caa48d259a39be78de
	  System UUID:                bbc86a40-2e55-48ca-a48d-259a39be78de
	  Boot ID:                    b8dfffb5-63e7-4c7e-8e52-3cf4873fed01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwl6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-7db6d8ff4d-rslhx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 coredns-7db6d8ff4d-xkdcv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 etcd-ha-274394                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-p6qmw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-274394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-controller-manager-ha-274394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-pwbfs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-274394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-vip-ha-274394                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m28s (x6 over 7m28s)  kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m28s (x7 over 7m28s)  kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x6 over 7m28s)  kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m21s                  kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s                  kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s                  kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m9s                   node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal  NodeReady                7m6s                   kubelet          Node ha-274394 status is now: NodeReady
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	
	
	Name:               ha-274394-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:58:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:01:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 00:00:38 +0000   Mon, 29 Apr 2024 00:02:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-274394-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b55609ff590f4bdba17fff0e954879c9
	  System UUID:                b55609ff-590f-4bdb-a17f-ff0e954879c9
	  Boot ID:                    855b13e2-38c0-4157-be3d-1ab6ccd7558c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmk6v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-274394-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-6qf7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-274394-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-274394-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-g95c9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-274394-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-vip-ha-274394-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x7 over 6m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  NodeNotReady             2m38s                  node-controller  Node ha-274394-m02 status is now: NodeNotReady
	
	
	Name:               ha-274394-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_59_58_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:04:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:00:25 +0000   Sun, 28 Apr 2024 23:59:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:00:25 +0000   Mon, 29 Apr 2024 00:00:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-274394-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d93714f0c10b4313b4406039da06a844
	  System UUID:                d93714f0-c10b-4313-b440-6039da06a844
	  Boot ID:                    f3fcc183-a68b-4912-a90c-8983fd2d233d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kjcqn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-274394-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m51s
	  kube-system                 kindnet-29qlf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m53s
	  kube-system                 kube-apiserver-ha-274394-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-ha-274394-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-4rb7k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-ha-274394-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-vip-ha-274394-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node ha-274394-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	
	
	Name:               ha-274394-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_00_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:00:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:00:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:01:29 +0000   Mon, 29 Apr 2024 00:01:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-274394-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eda4c6845a404536baab34c56e482672
	  System UUID:                eda4c684-5a40-4536-baab-34c56e482672
	  Boot ID:                    3678260a-6c98-4396-a49b-11d148407cb5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7wp2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m49s
	  kube-system                 kube-proxy-4h24n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m49s (x2 over 3m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x2 over 3m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x2 over 3m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-274394-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr28 23:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052234] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044810] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.652996] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.522934] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr28 23:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.108939] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.062174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072067] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.188727] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118445] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.277590] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.051195] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.782579] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.939635] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.597447] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.110049] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.496389] kauditd_printk_skb: 21 callbacks suppressed
	[Apr28 23:58] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6] <==
	{"level":"warn","ts":"2024-04-29T00:04:47.32717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.427162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.469473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.528022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.627821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.721212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.727977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.733725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.738272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.750154Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.75826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.766765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.770592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.775095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.787294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.79624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.806187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.811679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.844778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.849097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.854347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.865822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.873588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.880638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T00:04:47.927711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3f0f97df8a50e0be","from":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:04:47 up 7 min,  0 users,  load average: 0.06, 0.19, 0.10
	Linux ha-274394 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [229d446ccd2c11d44847ea2f5bb4f2085af2a5709495d0b888fc1d58d8389627] <==
	I0429 00:04:11.424066       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:04:21.442567       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:04:21.442703       1 main.go:227] handling current node
	I0429 00:04:21.442738       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:04:21.442767       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:04:21.443095       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:04:21.443151       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:04:21.443243       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:04:21.443263       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:04:31.450036       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:04:31.450188       1 main.go:227] handling current node
	I0429 00:04:31.450224       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:04:31.450245       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:04:31.450362       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:04:31.450383       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:04:31.450445       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:04:31.450463       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:04:41.463500       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:04:41.463651       1 main.go:227] handling current node
	I0429 00:04:41.463685       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:04:41.463713       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:04:41.463832       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:04:41.463852       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:04:41.464088       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:04:41.464145       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f] <==
	E0428 23:58:36.868759       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0428 23:58:36.868653       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 13.607µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0428 23:58:36.870554       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0428 23:58:36.870719       1 timeout.go:142] post-timeout activity - time-elapsed: 2.339766ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0429 00:00:25.137040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40050: use of closed network connection
	E0429 00:00:25.379481       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40060: use of closed network connection
	E0429 00:00:25.604805       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40076: use of closed network connection
	E0429 00:00:25.871777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40096: use of closed network connection
	E0429 00:00:26.091618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40108: use of closed network connection
	E0429 00:00:26.341033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40122: use of closed network connection
	E0429 00:00:26.594661       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40132: use of closed network connection
	E0429 00:00:26.812137       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40148: use of closed network connection
	E0429 00:00:27.039810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40166: use of closed network connection
	E0429 00:00:27.395521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40194: use of closed network connection
	E0429 00:00:27.600587       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33016: use of closed network connection
	E0429 00:00:27.822892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33036: use of closed network connection
	E0429 00:00:28.227332       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33060: use of closed network connection
	E0429 00:00:28.448673       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33082: use of closed network connection
	I0429 00:01:04.494213       1 trace.go:236] Trace[1312671356]: "Get" accept:application/json, */*,audit-id:59ccec51-1525-472a-acd9-d032d2c2bfbf,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 00:01:03.965) (total time: 528ms):
	Trace[1312671356]: ---"About to write a response" 528ms (00:01:04.494)
	Trace[1312671356]: [528.544822ms] [528.544822ms] END
	I0429 00:01:04.494822       1 trace.go:236] Trace[213141482]: "Update" accept:application/json, */*,audit-id:7b71b3b8-6c51-4af0-b485-7eb34cb112ec,client:192.168.39.237,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 00:01:03.939) (total time: 555ms):
	Trace[213141482]: ["GuaranteedUpdate etcd3" audit-id:7b71b3b8-6c51-4af0-b485-7eb34cb112ec,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 555ms (00:01:03.939)
	Trace[213141482]:  ---"Txn call completed" 554ms (00:01:04.494)]
	Trace[213141482]: [555.360928ms] [555.360928ms] END
	
	
	==> kube-controller-manager [ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9] <==
	I0428 23:58:38.037991       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m02"
	E0428 23:59:54.803847       1 certificate_controller.go:146] Sync csr-wdx26 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wdx26": the object has been modified; please apply your changes to the latest version and try again
	I0428 23:59:54.892346       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-274394-m03\" does not exist"
	I0428 23:59:54.940848       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-274394-m03" podCIDRs=["10.244.2.0/24"]
	I0428 23:59:58.081684       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m03"
	I0429 00:00:20.257387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.850346ms"
	I0429 00:00:20.462634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="204.923458ms"
	I0429 00:00:20.642183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.010358ms"
	E0429 00:00:20.642327       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:00:20.663485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.009121ms"
	I0429 00:00:20.664416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.99µs"
	I0429 00:00:24.116794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.639622ms"
	I0429 00:00:24.116983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.564µs"
	I0429 00:00:24.173133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.902432ms"
	I0429 00:00:24.173525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.207µs"
	I0429 00:00:24.532136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.587076ms"
	I0429 00:00:24.534021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.27µs"
	E0429 00:00:58.696735       1 certificate_controller.go:146] Sync csr-9ztb7 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-9ztb7": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:00:58.906478       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-274394-m04\" does not exist"
	I0429 00:00:59.000800       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-274394-m04" podCIDRs=["10.244.3.0/24"]
	I0429 00:01:03.130368       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-274394-m04"
	I0429 00:01:10.233626       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	I0429 00:02:09.531221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	I0429 00:02:09.671054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.148922ms"
	I0429 00:02:09.671354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.538µs"
	
	
	==> kube-proxy [10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a] <==
	I0428 23:57:40.051962       1 server_linux.go:69] "Using iptables proxy"
	I0428 23:57:40.064077       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	I0428 23:57:40.189337       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0428 23:57:40.189412       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0428 23:57:40.189431       1 server_linux.go:165] "Using iptables Proxier"
	I0428 23:57:40.192878       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0428 23:57:40.193163       1 server.go:872] "Version info" version="v1.30.0"
	I0428 23:57:40.193199       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0428 23:57:40.194579       1 config.go:192] "Starting service config controller"
	I0428 23:57:40.194626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0428 23:57:40.194648       1 config.go:101] "Starting endpoint slice config controller"
	I0428 23:57:40.194651       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0428 23:57:40.195219       1 config.go:319] "Starting node config controller"
	I0428 23:57:40.195253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0428 23:57:40.301194       1 shared_informer.go:320] Caches are synced for node config
	I0428 23:57:40.301247       1 shared_informer.go:320] Caches are synced for service config
	I0428 23:57:40.301268       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1] <==
	W0428 23:57:23.781454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0428 23:57:23.781572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0428 23:57:23.911255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0428 23:57:23.911314       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0428 23:57:23.977138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:23.977200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.127191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:24.127258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.129643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0428 23:57:24.129701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0428 23:57:24.145975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0428 23:57:24.146030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0428 23:57:24.181216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0428 23:57:24.181243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0428 23:57:24.192501       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0428 23:57:24.192554       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0428 23:57:26.404129       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 00:00:20.266398       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwl6p\": pod busybox-fc5497c4f-wwl6p is already assigned to node \"ha-274394\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wwl6p" node="ha-274394"
	E0429 00:00:20.266508       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kjcqn\": pod busybox-fc5497c4f-kjcqn is already assigned to node \"ha-274394-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kjcqn" node="ha-274394-m03"
	E0429 00:00:20.271638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a6a06956-e991-47ab-986f-34d9467a7dec(default/busybox-fc5497c4f-wwl6p) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wwl6p"
	E0429 00:00:20.272546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwl6p\": pod busybox-fc5497c4f-wwl6p is already assigned to node \"ha-274394\"" pod="default/busybox-fc5497c4f-wwl6p"
	I0429 00:00:20.273228       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wwl6p" node="ha-274394"
	E0429 00:00:20.271544       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 76314c87-6b7d-4bfa-83ce-3ace75fa7aee(default/busybox-fc5497c4f-kjcqn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kjcqn"
	E0429 00:00:20.273888       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kjcqn\": pod busybox-fc5497c4f-kjcqn is already assigned to node \"ha-274394-m03\"" pod="default/busybox-fc5497c4f-kjcqn"
	I0429 00:00:20.274053       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kjcqn" node="ha-274394-m03"
	
	
	==> kubelet <==
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:00:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:00:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:00:28 ha-274394 kubelet[1379]: E0429 00:00:28.227727    1379 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52374->127.0.0.1:41399: write tcp 127.0.0.1:52374->127.0.0.1:41399: write: broken pipe
	Apr 29 00:01:26 ha-274394 kubelet[1379]: E0429 00:01:26.204575    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:01:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:01:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:02:26 ha-274394 kubelet[1379]: E0429 00:02:26.204755    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:02:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:02:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:03:26 ha-274394 kubelet[1379]: E0429 00:03:26.207355    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:03:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:03:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:04:26 ha-274394 kubelet[1379]: E0429 00:04:26.211104    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:04:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:04:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:04:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:04:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-274394 -n ha-274394
helpers_test.go:261: (dbg) Run:  kubectl --context ha-274394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-274394 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-274394 -v=7 --alsologtostderr
E0429 00:05:48.629435   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:06:16.313666   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-274394 -v=7 --alsologtostderr: exit status 82 (2m2.703060818s)

                                                
                                                
-- stdout --
	* Stopping node "ha-274394-m04"  ...
	* Stopping node "ha-274394-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:04:49.492241   42121 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:04:49.492406   42121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:49.492415   42121 out.go:304] Setting ErrFile to fd 2...
	I0429 00:04:49.492420   42121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:04:49.492622   42121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:04:49.492899   42121 out.go:298] Setting JSON to false
	I0429 00:04:49.492981   42121 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:49.493368   42121 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:49.493469   42121 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0429 00:04:49.493658   42121 mustload.go:65] Loading cluster: ha-274394
	I0429 00:04:49.493835   42121 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:04:49.493871   42121 stop.go:39] StopHost: ha-274394-m04
	I0429 00:04:49.494322   42121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:49.494382   42121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:49.509534   42121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0429 00:04:49.509969   42121 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:49.510519   42121 main.go:141] libmachine: Using API Version  1
	I0429 00:04:49.510543   42121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:49.510911   42121 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:49.513405   42121 out.go:177] * Stopping node "ha-274394-m04"  ...
	I0429 00:04:49.514482   42121 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 00:04:49.514524   42121 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:04:49.514748   42121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 00:04:49.514787   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:04:49.517425   42121 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:49.517805   42121 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:00:45 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:04:49.517836   42121 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:04:49.517935   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:04:49.518111   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:04:49.518264   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:04:49.518415   42121 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:04:49.602886   42121 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 00:04:49.657692   42121 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 00:04:49.715135   42121 main.go:141] libmachine: Stopping "ha-274394-m04"...
	I0429 00:04:49.715162   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:49.716778   42121 main.go:141] libmachine: (ha-274394-m04) Calling .Stop
	I0429 00:04:49.720227   42121 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 0/120
	I0429 00:04:50.721699   42121 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 1/120
	I0429 00:04:51.723470   42121 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:04:51.724746   42121 main.go:141] libmachine: Machine "ha-274394-m04" was stopped.
	I0429 00:04:51.724764   42121 stop.go:75] duration metric: took 2.210284585s to stop
	I0429 00:04:51.724807   42121 stop.go:39] StopHost: ha-274394-m03
	I0429 00:04:51.725131   42121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:04:51.725185   42121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:04:51.740589   42121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0429 00:04:51.741126   42121 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:04:51.741661   42121 main.go:141] libmachine: Using API Version  1
	I0429 00:04:51.741684   42121 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:04:51.741977   42121 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:04:51.744052   42121 out.go:177] * Stopping node "ha-274394-m03"  ...
	I0429 00:04:51.745600   42121 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 00:04:51.745629   42121 main.go:141] libmachine: (ha-274394-m03) Calling .DriverName
	I0429 00:04:51.745861   42121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 00:04:51.745885   42121 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHHostname
	I0429 00:04:51.748732   42121 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:51.749180   42121 main.go:141] libmachine: (ha-274394-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:4c:dd", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:59:15 +0000 UTC Type:0 Mac:52:54:00:0d:4c:dd Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-274394-m03 Clientid:01:52:54:00:0d:4c:dd}
	I0429 00:04:51.749231   42121 main.go:141] libmachine: (ha-274394-m03) DBG | domain ha-274394-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:0d:4c:dd in network mk-ha-274394
	I0429 00:04:51.749362   42121 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHPort
	I0429 00:04:51.749526   42121 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHKeyPath
	I0429 00:04:51.749631   42121 main.go:141] libmachine: (ha-274394-m03) Calling .GetSSHUsername
	I0429 00:04:51.749780   42121 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m03/id_rsa Username:docker}
	I0429 00:04:51.838939   42121 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 00:04:51.894687   42121 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 00:04:51.950165   42121 main.go:141] libmachine: Stopping "ha-274394-m03"...
	I0429 00:04:51.950193   42121 main.go:141] libmachine: (ha-274394-m03) Calling .GetState
	I0429 00:04:51.951637   42121 main.go:141] libmachine: (ha-274394-m03) Calling .Stop
	I0429 00:04:51.954702   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 0/120
	I0429 00:04:52.956010   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 1/120
	I0429 00:04:53.958142   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 2/120
	I0429 00:04:54.959483   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 3/120
	I0429 00:04:55.961148   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 4/120
	I0429 00:04:56.963094   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 5/120
	I0429 00:04:57.964680   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 6/120
	I0429 00:04:58.966199   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 7/120
	I0429 00:04:59.967598   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 8/120
	I0429 00:05:00.968905   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 9/120
	I0429 00:05:01.970294   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 10/120
	I0429 00:05:02.972510   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 11/120
	I0429 00:05:03.973840   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 12/120
	I0429 00:05:04.975526   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 13/120
	I0429 00:05:05.977099   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 14/120
	I0429 00:05:06.978931   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 15/120
	I0429 00:05:07.980283   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 16/120
	I0429 00:05:08.981809   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 17/120
	I0429 00:05:09.983319   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 18/120
	I0429 00:05:10.985080   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 19/120
	I0429 00:05:11.986822   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 20/120
	I0429 00:05:12.988736   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 21/120
	I0429 00:05:13.989970   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 22/120
	I0429 00:05:14.991499   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 23/120
	I0429 00:05:15.992854   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 24/120
	I0429 00:05:16.994534   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 25/120
	I0429 00:05:17.996003   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 26/120
	I0429 00:05:18.997683   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 27/120
	I0429 00:05:19.999304   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 28/120
	I0429 00:05:21.000652   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 29/120
	I0429 00:05:22.002551   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 30/120
	I0429 00:05:23.003848   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 31/120
	I0429 00:05:24.005584   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 32/120
	I0429 00:05:25.006998   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 33/120
	I0429 00:05:26.008366   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 34/120
	I0429 00:05:27.010087   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 35/120
	I0429 00:05:28.011806   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 36/120
	I0429 00:05:29.013341   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 37/120
	I0429 00:05:30.014447   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 38/120
	I0429 00:05:31.015869   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 39/120
	I0429 00:05:32.017646   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 40/120
	I0429 00:05:33.019213   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 41/120
	I0429 00:05:34.020302   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 42/120
	I0429 00:05:35.021918   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 43/120
	I0429 00:05:36.023076   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 44/120
	I0429 00:05:37.024934   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 45/120
	I0429 00:05:38.026639   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 46/120
	I0429 00:05:39.028503   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 47/120
	I0429 00:05:40.029672   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 48/120
	I0429 00:05:41.030813   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 49/120
	I0429 00:05:42.032483   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 50/120
	I0429 00:05:43.034620   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 51/120
	I0429 00:05:44.036045   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 52/120
	I0429 00:05:45.037317   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 53/120
	I0429 00:05:46.038587   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 54/120
	I0429 00:05:47.040140   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 55/120
	I0429 00:05:48.041419   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 56/120
	I0429 00:05:49.042747   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 57/120
	I0429 00:05:50.044154   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 58/120
	I0429 00:05:51.045337   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 59/120
	I0429 00:05:52.047059   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 60/120
	I0429 00:05:53.048310   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 61/120
	I0429 00:05:54.049692   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 62/120
	I0429 00:05:55.050936   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 63/120
	I0429 00:05:56.052229   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 64/120
	I0429 00:05:57.053981   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 65/120
	I0429 00:05:58.055285   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 66/120
	I0429 00:05:59.056605   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 67/120
	I0429 00:06:00.057931   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 68/120
	I0429 00:06:01.059108   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 69/120
	I0429 00:06:02.060810   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 70/120
	I0429 00:06:03.062125   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 71/120
	I0429 00:06:04.063417   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 72/120
	I0429 00:06:05.064715   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 73/120
	I0429 00:06:06.065987   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 74/120
	I0429 00:06:07.067675   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 75/120
	I0429 00:06:08.069086   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 76/120
	I0429 00:06:09.070502   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 77/120
	I0429 00:06:10.072515   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 78/120
	I0429 00:06:11.073843   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 79/120
	I0429 00:06:12.075562   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 80/120
	I0429 00:06:13.076990   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 81/120
	I0429 00:06:14.078455   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 82/120
	I0429 00:06:15.079891   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 83/120
	I0429 00:06:16.082142   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 84/120
	I0429 00:06:17.083954   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 85/120
	I0429 00:06:18.085343   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 86/120
	I0429 00:06:19.086691   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 87/120
	I0429 00:06:20.088148   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 88/120
	I0429 00:06:21.089410   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 89/120
	I0429 00:06:22.091339   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 90/120
	I0429 00:06:23.092808   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 91/120
	I0429 00:06:24.094920   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 92/120
	I0429 00:06:25.096363   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 93/120
	I0429 00:06:26.097741   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 94/120
	I0429 00:06:27.099605   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 95/120
	I0429 00:06:28.101010   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 96/120
	I0429 00:06:29.102403   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 97/120
	I0429 00:06:30.103824   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 98/120
	I0429 00:06:31.105200   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 99/120
	I0429 00:06:32.107051   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 100/120
	I0429 00:06:33.108366   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 101/120
	I0429 00:06:34.109672   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 102/120
	I0429 00:06:35.111008   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 103/120
	I0429 00:06:36.112282   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 104/120
	I0429 00:06:37.113559   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 105/120
	I0429 00:06:38.114855   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 106/120
	I0429 00:06:39.116962   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 107/120
	I0429 00:06:40.118408   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 108/120
	I0429 00:06:41.119768   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 109/120
	I0429 00:06:42.121216   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 110/120
	I0429 00:06:43.122588   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 111/120
	I0429 00:06:44.124029   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 112/120
	I0429 00:06:45.125297   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 113/120
	I0429 00:06:46.126563   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 114/120
	I0429 00:06:47.128394   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 115/120
	I0429 00:06:48.129632   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 116/120
	I0429 00:06:49.130801   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 117/120
	I0429 00:06:50.132076   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 118/120
	I0429 00:06:51.133217   42121 main.go:141] libmachine: (ha-274394-m03) Waiting for machine to stop 119/120
	I0429 00:06:52.133848   42121 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 00:06:52.133888   42121 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 00:06:52.136494   42121 out.go:177] 
	W0429 00:06:52.137962   42121 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 00:06:52.137975   42121 out.go:239] * 
	* 
	W0429 00:06:52.139924   42121 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 00:06:52.141050   42121 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-274394 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274394 --wait=true -v=7 --alsologtostderr
E0429 00:10:48.628792   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-274394 --wait=true -v=7 --alsologtostderr: (4m10.831552238s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-274394
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-274394 -n ha-274394
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-274394 logs -n 25: (2.168756021s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m04 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp testdata/cp-test.txt                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m04_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03:/home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m03 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-274394 node stop m02 -v=7                                                     | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-274394 node start m02 -v=7                                                    | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-274394 -v=7                                                           | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-274394 -v=7                                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-274394 --wait=true -v=7                                                    | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:06 UTC | 29 Apr 24 00:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-274394                                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:11 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:06:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:06:52.197194   42604 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:06:52.197450   42604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:06:52.197459   42604 out.go:304] Setting ErrFile to fd 2...
	I0429 00:06:52.197463   42604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:06:52.197634   42604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:06:52.198179   42604 out.go:298] Setting JSON to false
	I0429 00:06:52.199037   42604 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6556,"bootTime":1714342656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:06:52.199094   42604 start.go:139] virtualization: kvm guest
	I0429 00:06:52.201431   42604 out.go:177] * [ha-274394] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:06:52.203314   42604 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:06:52.203339   42604 notify.go:220] Checking for updates...
	I0429 00:06:52.204757   42604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:06:52.206208   42604 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:06:52.207668   42604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:06:52.208956   42604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:06:52.210108   42604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:06:52.211774   42604 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:06:52.211851   42604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:06:52.212232   42604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:06:52.212266   42604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:06:52.229812   42604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0429 00:06:52.230244   42604 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:06:52.230708   42604 main.go:141] libmachine: Using API Version  1
	I0429 00:06:52.230726   42604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:06:52.231066   42604 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:06:52.231247   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.265051   42604 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:06:52.266590   42604 start.go:297] selected driver: kvm2
	I0429 00:06:52.266609   42604 start.go:901] validating driver "kvm2" against &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:06:52.266787   42604 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:06:52.267122   42604 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:06:52.267192   42604 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:06:52.281336   42604 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:06:52.282001   42604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:06:52.282076   42604 cni.go:84] Creating CNI manager for ""
	I0429 00:06:52.282109   42604 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 00:06:52.282173   42604 start.go:340] cluster config:
	{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:06:52.282296   42604 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:06:52.284123   42604 out.go:177] * Starting "ha-274394" primary control-plane node in "ha-274394" cluster
	I0429 00:06:52.285418   42604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:06:52.285468   42604 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:06:52.285482   42604 cache.go:56] Caching tarball of preloaded images
	I0429 00:06:52.285578   42604 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:06:52.285594   42604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:06:52.285755   42604 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0429 00:06:52.286043   42604 start.go:360] acquireMachinesLock for ha-274394: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:06:52.286100   42604 start.go:364] duration metric: took 34.879µs to acquireMachinesLock for "ha-274394"
	I0429 00:06:52.286133   42604 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:06:52.286144   42604 fix.go:54] fixHost starting: 
	I0429 00:06:52.286520   42604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:06:52.286562   42604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:06:52.300078   42604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0429 00:06:52.300502   42604 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:06:52.300997   42604 main.go:141] libmachine: Using API Version  1
	I0429 00:06:52.301017   42604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:06:52.301343   42604 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:06:52.301531   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.301653   42604 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:06:52.303375   42604 fix.go:112] recreateIfNeeded on ha-274394: state=Running err=<nil>
	W0429 00:06:52.303398   42604 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:06:52.305378   42604 out.go:177] * Updating the running kvm2 "ha-274394" VM ...
	I0429 00:06:52.306864   42604 machine.go:94] provisionDockerMachine start ...
	I0429 00:06:52.306886   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.307062   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.309317   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.309707   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.309731   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.309874   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.310064   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.310212   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.310350   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.310497   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.310669   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.310679   42604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:06:52.421035   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0429 00:06:52.421070   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.421324   42604 buildroot.go:166] provisioning hostname "ha-274394"
	I0429 00:06:52.421344   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.421521   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.424131   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.424521   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.424550   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.424675   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.424869   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.425043   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.425197   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.425346   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.425501   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.425512   42604 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394 && echo "ha-274394" | sudo tee /etc/hostname
	I0429 00:06:52.554357   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0429 00:06:52.554389   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.557098   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.557469   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.557498   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.557723   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.557903   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.558087   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.558218   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.558402   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.558579   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.558597   42604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:06:52.671785   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:06:52.671822   42604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:06:52.671858   42604 buildroot.go:174] setting up certificates
	I0429 00:06:52.671869   42604 provision.go:84] configureAuth start
	I0429 00:06:52.671879   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.672124   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:06:52.674876   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.675279   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.675300   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.675516   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.677499   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.677878   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.677905   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.678069   42604 provision.go:143] copyHostCerts
	I0429 00:06:52.678115   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:06:52.678160   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:06:52.678173   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:06:52.678263   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:06:52.678389   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:06:52.678420   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:06:52.678431   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:06:52.678473   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:06:52.678551   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:06:52.678569   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:06:52.678573   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:06:52.678596   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:06:52.678659   42604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394 san=[127.0.0.1 192.168.39.237 ha-274394 localhost minikube]
	I0429 00:06:53.068566   42604 provision.go:177] copyRemoteCerts
	I0429 00:06:53.068629   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:06:53.068656   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:53.071443   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.071902   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:53.071929   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.072090   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:53.072302   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.072483   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:53.072652   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:06:53.158996   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 00:06:53.159079   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:06:53.198083   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 00:06:53.198169   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 00:06:53.239032   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 00:06:53.239097   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:06:53.281113   42604 provision.go:87] duration metric: took 609.230569ms to configureAuth
	I0429 00:06:53.281144   42604 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:06:53.281434   42604 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:06:53.281522   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:53.284407   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.284880   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:53.284913   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.285091   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:53.285322   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.285503   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.285667   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:53.285839   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:53.286079   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:53.286101   42604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:08:24.158742   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:08:24.158773   42604 machine.go:97] duration metric: took 1m31.851893107s to provisionDockerMachine
	I0429 00:08:24.158788   42604 start.go:293] postStartSetup for "ha-274394" (driver="kvm2")
	I0429 00:08:24.158805   42604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:08:24.158838   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.159184   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:08:24.159218   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.161934   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.162411   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.162454   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.162563   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.162746   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.162894   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.163019   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.251014   42604 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:08:24.256642   42604 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:08:24.256670   42604 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:08:24.256753   42604 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:08:24.256837   42604 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:08:24.256849   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0429 00:08:24.256934   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:08:24.267841   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:08:24.297386   42604 start.go:296] duration metric: took 138.583205ms for postStartSetup
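	(Annotation: the postStartSetup step above scans the profile's files/ tree and pushes each entry to the same path on the guest, e.g. files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem. A minimal stdlib-only sketch of that mapping follows; the function name is illustrative, not minikube's.)

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// localAssets returns hostPath -> guestPath for every file under filesDir,
	// mirroring the "Scanning .../.minikube/files for local assets" step above.
	func localAssets(filesDir string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(filesDir, path)
			if relErr != nil {
				return relErr
			}
			assets[path] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return assets, err
	}

	func main() {
		m, err := localAssets("/home/jenkins/minikube-integration/17977-13393/.minikube/files")
		if err != nil {
			fmt.Println(err)
			return
		}
		for host, guest := range m {
			fmt.Println(host, "->", guest)
		}
	}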
	I0429 00:08:24.297435   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.297759   42604 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 00:08:24.297789   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.300119   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.300515   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.300542   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.300645   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.300816   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.300961   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.301108   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	W0429 00:08:24.390728   42604 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 00:08:24.390750   42604 fix.go:56] duration metric: took 1m32.104607749s for fixHost
	I0429 00:08:24.390771   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.392977   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.393383   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.393416   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.393540   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.393724   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.393873   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.394041   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.394199   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:08:24.394375   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:08:24.394385   42604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:08:24.499552   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349304.449748376
	
	I0429 00:08:24.499574   42604 fix.go:216] guest clock: 1714349304.449748376
	I0429 00:08:24.499583   42604 fix.go:229] Guest: 2024-04-29 00:08:24.449748376 +0000 UTC Remote: 2024-04-29 00:08:24.39075762 +0000 UTC m=+92.239872716 (delta=58.990756ms)
	I0429 00:08:24.499622   42604 fix.go:200] guest clock delta is within tolerance: 58.990756ms
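	(Annotation: the fix.go lines above run `date +%s.%N` on the guest and compare it to the host clock. A minimal sketch of that delta check follows, using the values from this log; the 2s tolerance and helper name are assumptions, not taken from the minikube source.)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses the output of `date +%s.%N` run on the guest and
	// returns how far the guest clock is from the supplied host reference time.
	// float64 parsing loses sub-microsecond precision, which is fine for a
	// tolerance check.
	func guestClockDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(dateOutput, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest time %q: %w", dateOutput, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Host and guest timestamps taken from the log lines above.
		host := time.Date(2024, time.April, 29, 0, 8, 24, 390757620, time.UTC)
		delta, err := guestClockDelta("1714349304.449748376", host)
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed; the log only says "within tolerance"
		fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance)
	}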
	I0429 00:08:24.499635   42604 start.go:83] releasing machines lock for "ha-274394", held for 1m32.213510999s
	I0429 00:08:24.499653   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.499896   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:08:24.502347   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.502681   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.502706   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.502909   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503480   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503689   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503771   42604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:08:24.503806   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.503915   42604 ssh_runner.go:195] Run: cat /version.json
	I0429 00:08:24.503935   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.506801   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.506825   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507196   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.507241   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507269   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.507286   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507317   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.507516   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.507521   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.507674   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.507700   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.507822   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.507823   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.507967   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.609485   42604 ssh_runner.go:195] Run: systemctl --version
	I0429 00:08:24.616061   42604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:08:24.789399   42604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:08:24.797255   42604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:08:24.797340   42604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:08:24.807491   42604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:08:24.807521   42604 start.go:494] detecting cgroup driver to use...
	I0429 00:08:24.807586   42604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:08:24.825160   42604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:08:24.839957   42604 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:08:24.840003   42604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:08:24.854215   42604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:08:24.868170   42604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:08:25.030227   42604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:08:25.186975   42604 docker.go:233] disabling docker service ...
	I0429 00:08:25.187051   42604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:08:25.203862   42604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:08:25.218225   42604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:08:25.376978   42604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:08:25.535088   42604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:08:25.550812   42604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:08:25.571859   42604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:08:25.571907   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.583842   42604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:08:25.583898   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.595144   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.606128   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.617461   42604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:08:25.628739   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.640930   42604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.654503   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.666026   42604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:08:25.676598   42604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:08:25.687486   42604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:08:25.846601   42604 ssh_runner.go:195] Run: sudo systemctl restart crio
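	(Annotation: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed, reloads systemd, and restarts crio. As an illustration only, a sketch of the pause_image rewrite done in Go instead of sed; path and image come from the log, the function name is invented.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setPauseImage rewrites any existing pause_image assignment in a CRI-O
	// drop-in config, mirroring: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}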
	I0429 00:08:26.522677   42604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:08:26.522736   42604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:08:26.528043   42604 start.go:562] Will wait 60s for crictl version
	I0429 00:08:26.528087   42604 ssh_runner.go:195] Run: which crictl
	I0429 00:08:26.532332   42604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:08:26.579797   42604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:08:26.579862   42604 ssh_runner.go:195] Run: crio --version
	I0429 00:08:26.614566   42604 ssh_runner.go:195] Run: crio --version
	I0429 00:08:26.650706   42604 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:08:26.651948   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:08:26.654818   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:26.655215   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:26.655252   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:26.655534   42604 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:08:26.660587   42604 kubeadm.go:877] updating cluster {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:08:26.660721   42604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:08:26.660758   42604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:08:26.709663   42604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:08:26.709683   42604 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:08:26.709726   42604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:08:26.750223   42604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:08:26.750243   42604 cache_images.go:84] Images are preloaded, skipping loading
	I0429 00:08:26.750251   42604 kubeadm.go:928] updating node { 192.168.39.237 8443 v1.30.0 crio true true} ...
	I0429 00:08:26.750349   42604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
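	(Annotation: a minimal sketch of composing the kubelet ExecStart line dumped above from the node's version, hostname, and IP. Struct and function names are illustrative, not minikube's actual types.)

	package main

	import (
		"fmt"
		"strings"
	)

	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// kubeletExecStart reproduces the ExecStart flag set shown in the unit above.
	func kubeletExecStart(n nodeConfig) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + n.Hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + n.NodeIP,
		}
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
			n.KubernetesVersion, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart(nodeConfig{
			KubernetesVersion: "v1.30.0",
			Hostname:          "ha-274394",
			NodeIP:            "192.168.39.237",
		}))
	}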
	I0429 00:08:26.750407   42604 ssh_runner.go:195] Run: crio config
	I0429 00:08:26.804174   42604 cni.go:84] Creating CNI manager for ""
	I0429 00:08:26.804196   42604 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 00:08:26.804205   42604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:08:26.804229   42604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-274394 NodeName:ha-274394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:08:26.804419   42604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-274394"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
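	(Annotation: the kubeadm config above is a four-document YAML file later copied to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only sketch of sanity-checking that each document carries the expected kind; this is an editorial illustration, not how minikube validates the file.)

	package main

	import (
		"fmt"
		"strings"
	)

	// kindsOf returns the `kind:` value of each document in a multi-doc YAML string.
	func kindsOf(manifest string) []string {
		var kinds []string
		for _, doc := range strings.Split(manifest, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
					break
				}
			}
		}
		return kinds
	}

	func main() {
		// A trimmed stand-in for the config dumped above.
		manifest := `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration`
		manifest = strings.ReplaceAll(manifest, "\n\t", "\n") // strip the indentation of this raw literal
		fmt.Println(kindsOf(manifest))                        // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}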
	
	I0429 00:08:26.804444   42604 kube-vip.go:111] generating kube-vip config ...
	I0429 00:08:26.804482   42604 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 00:08:26.818406   42604 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 00:08:26.818523   42604 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
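	(Annotation: the kube-vip static-pod manifest above is driven by environment variables for the VIP, port, and interface, with lb_enable added because the log reports control-plane load-balancing was auto-enabled. A minimal sketch of assembling that env set; names here are illustrative, not minikube's.)

	package main

	import "fmt"

	type kubeVipOpts struct {
		VIP       string
		Port      string
		Interface string
		EnableLB  bool // auto-enabled when the ip_vs modules load, per the log above
	}

	// kubeVipEnv returns the core env entries that appear in the manifest above.
	func kubeVipEnv(o kubeVipOpts) map[string]string {
		env := map[string]string{
			"vip_arp":       "true",
			"port":          o.Port,
			"vip_interface": o.Interface,
			"cp_enable":     "true",
			"address":       o.VIP,
		}
		if o.EnableLB {
			env["lb_enable"] = "true"
			env["lb_port"] = o.Port
		}
		return env
	}

	func main() {
		fmt.Println(kubeVipEnv(kubeVipOpts{VIP: "192.168.39.254", Port: "8443", Interface: "eth0", EnableLB: true}))
	}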
	I0429 00:08:26.818587   42604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:08:26.830511   42604 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:08:26.830584   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 00:08:26.841867   42604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0429 00:08:26.861401   42604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:08:26.881198   42604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0429 00:08:26.901552   42604 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 00:08:26.920501   42604 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 00:08:26.932794   42604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:08:27.146743   42604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:08:27.166944   42604 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.237
	I0429 00:08:27.166964   42604 certs.go:194] generating shared ca certs ...
	I0429 00:08:27.166985   42604 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.167129   42604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:08:27.167178   42604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:08:27.167191   42604 certs.go:256] generating profile certs ...
	I0429 00:08:27.167261   42604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0429 00:08:27.167286   42604 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e
	I0429 00:08:27.167296   42604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.250 192.168.39.254]
	I0429 00:08:27.279401   42604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e ...
	I0429 00:08:27.279426   42604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e: {Name:mk1a57083afaac3908235246b81d4ca465b0a12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.279607   42604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e ...
	I0429 00:08:27.279622   42604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e: {Name:mk7360cee927f7f0e32d1159fbc68eac80a8e909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.279719   42604 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0429 00:08:27.279858   42604 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0429 00:08:27.279970   42604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0429 00:08:27.279986   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 00:08:27.279998   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 00:08:27.280008   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 00:08:27.280021   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 00:08:27.280034   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 00:08:27.280043   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 00:08:27.280061   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 00:08:27.280073   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 00:08:27.280122   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:08:27.280152   42604 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:08:27.280161   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:08:27.280180   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:08:27.280200   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:08:27.280224   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:08:27.280262   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:08:27.280287   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.280301   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.280313   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.280911   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:08:27.342679   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:08:27.370138   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:08:27.401268   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:08:27.429048   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 00:08:27.456869   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 00:08:27.492941   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:08:27.520962   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 00:08:27.547306   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:08:27.573710   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:08:27.628106   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:08:27.656098   42604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:08:27.680880   42604 ssh_runner.go:195] Run: openssl version
	I0429 00:08:27.688523   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:08:27.700994   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.706655   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.706707   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.713194   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:08:27.723967   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:08:27.737446   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.742959   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.743017   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.749953   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:08:27.761347   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:08:27.773738   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.781076   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.781129   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.788092   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
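	(Annotation: the runs above compute each CA's OpenSSL subject hash and link <hash>.0 to it under /etc/ssl/certs, e.g. b5213941.0 -> minikubeCA.pem. A sketch of that step, shelling out to the same openssl invocation; the function name is invented and paths are taken from the log.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash asks openssl for the certificate's subject hash and creates
	// the <hash>.0 symlink that the trust store lookup expects.
	func linkCAByHash(caPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Replace any stale link, mirroring `ln -fs`.
		_ = os.Remove(link)
		return os.Symlink(caPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}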
	I0429 00:08:27.800347   42604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:08:27.805372   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:08:27.811683   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:08:27.818280   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:08:27.824765   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:08:27.831052   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:08:27.837321   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
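	(Annotation: each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within 24 hours. A sketch of the same check done with crypto/x509 instead of shelling out; the helper name is illustrative.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at path
	// expires within the given window (86400s == 24h in the log above).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}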
	I0429 00:08:27.843568   42604 kubeadm.go:391] StartCluster: {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:08:27.843677   42604 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:08:27.843723   42604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:08:27.893340   42604 cri.go:91] found id: "75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e"
	I0429 00:08:27.893370   42604 cri.go:91] found id: "ff0985c2cbc2faeb24fdbf451088ac783cea059c29266fd0634ff2631b9618a9"
	I0429 00:08:27.893376   42604 cri.go:91] found id: "774658ec1346c8dea1393ee857b30d7310ad67da3bfb33af7b0865061134263e"
	I0429 00:08:27.893381   42604 cri.go:91] found id: "8ec6505d955c2854cade67c18fbccd249cffceeae0c551bde8591ec4af4ca404"
	I0429 00:08:27.893385   42604 cri.go:91] found id: "b7f3af13cf11d4dfe1dca83c7ae580e606bd39ff5ca3aa2d712f7055006b40f5"
	I0429 00:08:27.893389   42604 cri.go:91] found id: "0bf681974a82a099157f031fd9f5b94ff7f7f4dab5438c9f3cfc78c297cd79c6"
	I0429 00:08:27.893394   42604 cri.go:91] found id: "39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e"
	I0429 00:08:27.893398   42604 cri.go:91] found id: "4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a"
	I0429 00:08:27.893419   42604 cri.go:91] found id: "10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a"
	I0429 00:08:27.893430   42604 cri.go:91] found id: "1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036"
	I0429 00:08:27.893434   42604 cri.go:91] found id: "a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6"
	I0429 00:08:27.893438   42604 cri.go:91] found id: "cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1"
	I0429 00:08:27.893442   42604 cri.go:91] found id: "d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f"
	I0429 00:08:27.893447   42604 cri.go:91] found id: "ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9"
	I0429 00:08:27.893457   42604 cri.go:91] found id: ""
	I0429 00:08:27.893513   42604 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.894133760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9826ed3f-b18c-47dc-9844-fa3b9a94604d name=/runtime.v1.RuntimeService/Version
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.902814107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=102b3df2-79f3-48b6-9421-7788f48f4984 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.903556698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349463903518268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=102b3df2-79f3-48b6-9421-7788f48f4984 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.905084309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e078e340-83e5-499e-a2ea-0ba1c7a89620 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.905241632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e078e340-83e5-499e-a2ea-0ba1c7a89620 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.905874170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e078e340-83e5-499e-a2ea-0ba1c7a89620 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.961878321Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c323e2a-4abb-4151-9a2c-a7d3273439bf name=/runtime.v1.RuntimeService/Version
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.962042483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c323e2a-4abb-4151-9a2c-a7d3273439bf name=/runtime.v1.RuntimeService/Version
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.970314575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdb89296-78ef-4b58-9c17-00c4beb06b33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.970877065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349463970847880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdb89296-78ef-4b58-9c17-00c4beb06b33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.972503444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b8e6742-27a8-4740-8656-6dfdc082b010 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.972590894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b8e6742-27a8-4740-8656-6dfdc082b010 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.973269529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b8e6742-27a8-4740-8656-6dfdc082b010 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.979086661Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4f3998c8-a8d2-461f-a1f5-abb66ce160bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.982866006Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wwl6p,Uid:a6a06956-e991-47ab-986f-34d9467a7dec,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349346668765964,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:00:20.232551293Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-274394,Uid:b2b390e5e039b165a1793386b9ae3070,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1714349327231550694,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{kubernetes.io/config.hash: b2b390e5e039b165a1793386b9ae3070,kubernetes.io/config.seen: 2024-04-29T00:08:26.872175224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rslhx,Uid:b73501ce-7591-45a5-b59e-331f7752c71b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349313007497659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-28T23:57:41.185036167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkdcv,Uid:60272694-edd8-4a8c-abd9-707cdb1355ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312961751769,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:41.198044791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-274394,Uid:4efe96637929623fb8b0eb26a06bea4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312923068075,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.237:8443,kubernetes.io/config.hash: 4efe96637929623fb8b0eb26a06bea4f,kubernetes.io/config.seen: 2024-04-28T23:57:26.124687930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312914492384,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-28T23:57:41.189571842Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&PodSandboxMetadata{Name:kube-controller-ma
nager-ha-274394,Uid:d48b86fddc4d5249a88aeb3e4377a6f7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312911522739,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d48b86fddc4d5249a88aeb3e4377a6f7,kubernetes.io/config.seen: 2024-04-28T23:57:26.124679832Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&PodSandboxMetadata{Name:kube-proxy-pwbfs,Uid:5303f947-6c3f-47b5-b396-33b92049d48f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312901752775,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:38.913288702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&PodSandboxMetadata{Name:etcd-ha-274394,Uid:2ada5cad8658d509e973050889a81f40,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312900876787,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kubernetes.io/config.hash: 2ada5cad8658d509e973050889a81f40,kubernetes.io/config.seen: 2024-04-28T23:57:26.124687065Z,kubernetes.io/config.source: file,},RuntimeHa
ndler:,},&PodSandbox{Id:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-274394,Uid:d2454bba76a07a5ac0349d2285d97e46,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312893425142,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2454bba76a07a5ac0349d2285d97e46,kubernetes.io/config.seen: 2024-04-28T23:57:26.124682909Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&PodSandboxMetadata{Name:kindnet-p6qmw,Uid:528219cb-5850-471c-97de-c31dcca693b1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349306934681982,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:38.921396163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wwl6p,Uid:a6a06956-e991-47ab-986f-34d9467a7dec,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348820850311750,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:00:20.232551293Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkdcv,Uid:60272694-edd8-4a8c-abd9-707cdb1355ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348661515999462,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:41.198044791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rslhx,Uid:b73501ce-7591-45a5-b59e-331f7752c71b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348661497000874,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:41.185036167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&PodSandboxMetadata{Name:kube-proxy-pwbfs,Uid:5303f947-6c3f-47b5-b396-33b92049d48f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348659239640588,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:38.913288702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&PodSandboxMetadata{Name:etcd-ha-274394,Uid:2ada5cad8658d509e973050889a81f40,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348639701826383,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kubernetes.io/config.hash: 2ada5cad8658d509e973050889a81f40,kubernetes.io/config.seen: 2024-04-28T23:57:19.227903027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-274394,Uid:d2454bba76a07a5ac0349d2285d97e46,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714348639694572
568,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2454bba76a07a5ac0349d2285d97e46,kubernetes.io/config.seen: 2024-04-28T23:57:19.227992444Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4f3998c8-a8d2-461f-a1f5-abb66ce160bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.986811989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aaade09-2e9e-4d43-b02c-a68b394b1472 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.987041962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aaade09-2e9e-4d43-b02c-a68b394b1472 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:03 ha-274394 crio[3921]: time="2024-04-29 00:11:03.987489917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aaade09-2e9e-4d43-b02c-a68b394b1472 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.035813416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64168300-c759-4521-86c4-f4779cfbdef5 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.036024665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64168300-c759-4521-86c4-f4779cfbdef5 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.037898381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=233a5c29-6958-4796-810a-bb5b01d5209e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.038557723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349464038529883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=233a5c29-6958-4796-810a-bb5b01d5209e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.039747381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94fcf250-8928-479e-ba55-3e1aaecaffb0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.039859917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94fcf250-8928-479e-ba55-3e1aaecaffb0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:11:04 ha-274394 crio[3921]: time="2024-04-29 00:11:04.041233138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94fcf250-8928-479e-ba55-3e1aaecaffb0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bb55e9c6d522d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       5                   4606d15628a33       storage-provisioner
	b6a7d4dbe869c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   ae51d2fad4a66       kindnet-p6qmw
	d4b7729fd4b49       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   8b3ab43b76165       kube-apiserver-ha-274394
	35d9114d32187       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   72a5aebc54111       kube-controller-manager-ha-274394
	3c0243bf3189c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c39536af7def5       busybox-fc5497c4f-wwl6p
	90230580bb896       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   f72b9f6f6c657       kube-vip-ha-274394
	0503917a13777       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c29249a598bab       coredns-7db6d8ff4d-rslhx
	8b48a4004872d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   a9733b733641b       kube-proxy-pwbfs
	8c0fee281fb30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   fbde7716d2c88       coredns-7db6d8ff4d-xkdcv
	b7fcfc456098f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   8b3ab43b76165       kube-apiserver-ha-274394
	b573af7fe461e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   e244f4be4872c       etcd-ha-274394
	a413dc9a5467e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   72a5aebc54111       kube-controller-manager-ha-274394
	5697620f655f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   ae92a5ea253f1       kube-scheduler-ha-274394
	95153ebb81f24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   4606d15628a33       storage-provisioner
	75b0b6d5d9883       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   ae51d2fad4a66       kindnet-p6qmw
	6191db59237ab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   7dc34422a092b       busybox-fc5497c4f-wwl6p
	39cef99138b5e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   86b45c3768b5c       coredns-7db6d8ff4d-rslhx
	4b75dd2cf8167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   0a16b0222b334       coredns-7db6d8ff4d-xkdcv
	10c90fba42aa7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   fe59c57afd7dc       kube-proxy-pwbfs
	a2665b4434106       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   9792afe7047da       etcd-ha-274394
	cd7d63b0cf58d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago       Exited              kube-scheduler            0                   fb9c09a8e5609       kube-scheduler-ha-274394
	
	
	==> coredns [0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1206273367]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 00:08:40.785) (total time: 10001ms):
	Trace[1206273367]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:50.787)
	Trace[1206273367]: [10.001745644s] [10.001745644s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e] <==
	[INFO] 10.244.1.2:60722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098874s
	[INFO] 10.244.0.4:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092957s
	[INFO] 10.244.0.4:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001823584s
	[INFO] 10.244.0.4:33350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106647s
	[INFO] 10.244.0.4:39835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220436s
	[INFO] 10.244.0.4:34474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060725s
	[INFO] 10.244.0.4:42677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076278s
	[INFO] 10.244.2.2:41566 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146322s
	[INFO] 10.244.2.2:39633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160447s
	[INFO] 10.244.2.2:36533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123881s
	[INFO] 10.244.1.2:54710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162932s
	[INFO] 10.244.1.2:59010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096219s
	[INFO] 10.244.1.2:39468 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158565s
	[INFO] 10.244.0.4:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179168s
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091044s
	[INFO] 10.244.2.2:46078 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195018s
	[INFO] 10.244.2.2:47504 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268349s
	[INFO] 10.244.1.2:34168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161101s
	[INFO] 10.244.0.4:52891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148878s
	[INFO] 10.244.0.4:43079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155917s
	[INFO] 10.244.0.4:46898 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114218s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1867&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a] <==
	[INFO] 10.244.0.4:54740 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078827s
	[INFO] 10.244.0.4:52614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00194917s
	[INFO] 10.244.2.2:33162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:57592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.023066556s
	[INFO] 10.244.2.2:57043 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235049s
	[INFO] 10.244.1.2:47075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014599s
	[INFO] 10.244.1.2:60870 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002072779s
	[INFO] 10.244.1.2:46861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094825s
	[INFO] 10.244.1.2:46908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186676s
	[INFO] 10.244.0.4:60188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001709235s
	[INFO] 10.244.0.4:43834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109382s
	[INFO] 10.244.2.2:42186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000296079s
	[INFO] 10.244.1.2:44715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184251s
	[INFO] 10.244.0.4:45543 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116414s
	[INFO] 10.244.0.4:47556 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083226s
	[INFO] 10.244.2.2:59579 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198403s
	[INFO] 10.244.2.2:42196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278968s
	[INFO] 10.244.1.2:34121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222019s
	[INFO] 10.244.1.2:54334 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016838s
	[INFO] 10.244.1.2:37434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099473s
	[INFO] 10.244.0.4:58711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000413259s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=9m2s&timeoutSeconds=542&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[999540786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 00:08:45.833) (total time: 10924ms):
	Trace[999540786]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer 10924ms (00:08:56.758)
	Trace[999540786]: [10.924375289s] [10.924375289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-274394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:57:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:11:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-274394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbc86a402e5548caa48d259a39be78de
	  System UUID:                bbc86a40-2e55-48ca-a48d-259a39be78de
	  Boot ID:                    b8dfffb5-63e7-4c7e-8e52-3cf4873fed01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwl6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-rslhx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-xkdcv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-274394                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-p6qmw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-274394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-274394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pwbfs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-274394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-274394                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x7 over 13m)      kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x6 over 13m)      kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x6 over 13m)      kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-274394 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Warning  ContainerGCFailed        2m38s (x2 over 3m38s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           100s                   node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	
	
	Name:               ha-274394-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:11:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-274394-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b55609ff590f4bdba17fff0e954879c9
	  System UUID:                b55609ff-590f-4bdb-a17f-ff0e954879c9
	  Boot ID:                    54f50319-7460-41a7-a5f8-ad51d6817779
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmk6v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-274394-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6qf7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-274394-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-274394-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-g95c9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-274394-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-274394-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 86s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  NodeNotReady             8m55s                  node-controller  Node ha-274394-m02 status is now: NodeNotReady
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                   node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	
	
	Name:               ha-274394-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_59_58_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:10:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:10:35 +0000   Mon, 29 Apr 2024 00:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:10:35 +0000   Mon, 29 Apr 2024 00:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:10:35 +0000   Mon, 29 Apr 2024 00:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:10:35 +0000   Mon, 29 Apr 2024 00:10:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-274394-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d93714f0c10b4313b4406039da06a844
	  System UUID:                d93714f0-c10b-4313-b440-6039da06a844
	  Boot ID:                    426d7da8-2be1-4732-a524-2b744f3416f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kjcqn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-274394-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-29qlf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-274394-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-274394-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4rb7k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-274394-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-274394-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-274394-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-274394-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x3 over 59s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x3 over 59s)  kubelet          Node ha-274394-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x3 over 59s)  kubelet          Node ha-274394-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s (x2 over 59s)  kubelet          Node ha-274394-m03 has been rebooted, boot id: 426d7da8-2be1-4732-a524-2b744f3416f4
	  Normal   NodeReady                59s (x2 over 59s)  kubelet          Node ha-274394-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-274394-m03 event: Registered Node ha-274394-m03 in Controller
	
	
	Name:               ha-274394-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_00_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:00:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:10:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:10:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-274394-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eda4c6845a404536baab34c56e482672
	  System UUID:                eda4c684-5a40-4536-baab-34c56e482672
	  Boot ID:                    bbb756fc-2b7d-430c-ac26-49a753bf4a63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-r7wp2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4h24n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 9m59s              kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeReady                9m54s              kubelet          Node ha-274394-m04 status is now: NodeReady
	  Normal   RegisteredNode           100s               node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-274394-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-274394-m04 has been rebooted, boot id: bbb756fc-2b7d-430c-ac26-49a753bf4a63
	  Normal   NodeReady                9s                 kubelet          Node ha-274394-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072067] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.188727] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118445] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.277590] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.051195] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.782579] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.939635] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.597447] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.110049] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.496389] kauditd_printk_skb: 21 callbacks suppressed
	[Apr28 23:58] kauditd_printk_skb: 74 callbacks suppressed
	[Apr29 00:05] kauditd_printk_skb: 1 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.155395] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.189653] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159042] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[  +0.310162] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[  +1.285716] systemd-fstab-generator[4012]: Ignoring "noauto" option for root device
	[  +5.935878] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.390575] kauditd_printk_skb: 87 callbacks suppressed
	[ +12.102672] kauditd_printk_skb: 2 callbacks suppressed
	[Apr29 00:09] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.302728] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6] <==
	{"level":"warn","ts":"2024-04-29T00:06:53.439485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:06:44.82249Z","time spent":"8.616986499s","remote":"127.0.0.1:36576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	2024/04/29 00:06:53 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T00:06:53.429449Z","caller":"traceutil/trace.go:171","msg":"trace[2112290400] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"1.148304313s","start":"2024-04-29T00:06:52.281137Z","end":"2024-04-29T00:06:53.429442Z","steps":["trace[2112290400] 'agreement among raft nodes before linearized reading'  (duration: 1.128974947s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:06:53.439558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:06:52.281044Z","time spent":"1.158507029s","remote":"127.0.0.1:36494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/04/29 00:06:53 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T00:06:53.494577Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:06:53.494643Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:06:53.496045Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3f0f97df8a50e0be","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T00:06:53.496263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496339Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496406Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496601Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496682Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496726Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496732Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.49674Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.49679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.496882Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497166Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.500972Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2024-04-29T00:06:53.501111Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2024-04-29T00:06:53.501146Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-274394","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	
	
	==> etcd [b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06] <==
	{"level":"warn","ts":"2024-04-29T00:10:09.552605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"76ea7d5cdc93362b","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-29T00:10:11.833764Z","caller":"traceutil/trace.go:171","msg":"trace[2046273254] transaction","detail":"{read_only:false; response_revision:2372; number_of_response:1; }","duration":"135.943408ms","start":"2024-04-29T00:10:11.697799Z","end":"2024-04-29T00:10:11.833742Z","steps":["trace[2046273254] 'process raft request'  (duration: 135.593898ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:10:12.26378Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"76ea7d5cdc93362b","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T00:10:12.2639Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"76ea7d5cdc93362b","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T00:10:14.553853Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"76ea7d5cdc93362b","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T00:10:14.554101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"76ea7d5cdc93362b","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T00:10:16.269762Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"76ea7d5cdc93362b","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T00:10:16.270027Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"76ea7d5cdc93362b","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-29T00:10:16.400101Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:10:16.427212Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:10:16.446462Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:10:16.448652Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3f0f97df8a50e0be","to":"76ea7d5cdc93362b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T00:10:16.448735Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:10:16.451463Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3f0f97df8a50e0be","to":"76ea7d5cdc93362b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T00:10:16.451673Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:10:20.618728Z","caller":"traceutil/trace.go:171","msg":"trace[1237015245] transaction","detail":"{read_only:false; response_revision:2403; number_of_response:1; }","duration":"168.332896ms","start":"2024-04-29T00:10:20.45035Z","end":"2024-04-29T00:10:20.618683Z","steps":["trace[1237015245] 'process raft request'  (duration: 168.10933ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:10:20.620212Z","caller":"traceutil/trace.go:171","msg":"trace[1138952764] linearizableReadLoop","detail":"{readStateIndex:2793; appliedIndex:2794; }","duration":"125.639337ms","start":"2024-04-29T00:10:20.494548Z","end":"2024-04-29T00:10:20.620188Z","steps":["trace[1138952764] 'read index received'  (duration: 125.634686ms)","trace[1138952764] 'applied index is now lower than readState.Index'  (duration: 3.551µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:10:20.620532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.89127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-29T00:10:20.620609Z","caller":"traceutil/trace.go:171","msg":"trace[434455184] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:2403; }","duration":"126.079077ms","start":"2024-04-29T00:10:20.494515Z","end":"2024-04-29T00:10:20.620594Z","steps":["trace[434455184] 'agreement among raft nodes before linearized reading'  (duration: 125.808224ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:10:20.621284Z","caller":"traceutil/trace.go:171","msg":"trace[1628268303] transaction","detail":"{read_only:false; response_revision:2404; number_of_response:1; }","duration":"160.842218ms","start":"2024-04-29T00:10:20.46043Z","end":"2024-04-29T00:10:20.621272Z","steps":["trace[1628268303] 'process raft request'  (duration: 160.74606ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:10:29.267339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.758166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-274394-m03\" ","response":"range_response_count:1 size:7025"}
	{"level":"info","ts":"2024-04-29T00:10:29.267421Z","caller":"traceutil/trace.go:171","msg":"trace[1111362850] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-274394-m03; range_end:; response_count:1; response_revision:2446; }","duration":"116.866213ms","start":"2024-04-29T00:10:29.150534Z","end":"2024-04-29T00:10:29.2674Z","steps":["trace[1111362850] 'agreement among raft nodes before linearized reading'  (duration: 87.593329ms)","trace[1111362850] 'range keys from in-memory index tree'  (duration: 29.146552ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:10:29.267725Z","caller":"traceutil/trace.go:171","msg":"trace[1020126397] transaction","detail":"{read_only:false; response_revision:2447; number_of_response:1; }","duration":"117.365662ms","start":"2024-04-29T00:10:29.150349Z","end":"2024-04-29T00:10:29.267715Z","steps":["trace[1020126397] 'process raft request'  (duration: 87.22677ms)","trace[1020126397] 'compare'  (duration: 29.790081ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:10:59.044526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.860865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-4h24n\" ","response":"range_response_count:1 size:4997"}
	{"level":"info","ts":"2024-04-29T00:10:59.044746Z","caller":"traceutil/trace.go:171","msg":"trace[851947625] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-4h24n; range_end:; response_count:1; response_revision:2552; }","duration":"201.124342ms","start":"2024-04-29T00:10:58.843605Z","end":"2024-04-29T00:10:59.044729Z","steps":["trace[851947625] 'range keys from in-memory index tree'  (duration: 199.880435ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:11:04 up 14 min,  0 users,  load average: 0.92, 0.80, 0.43
	Linux ha-274394 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e] <==
	I0429 00:08:27.731354       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 00:08:28.124214       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 00:08:28.124504       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 00:08:32.181370       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 00:08:35.254210       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 00:08:38.326339       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa] <==
	I0429 00:10:28.526811       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:10:38.551161       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:10:38.551869       1 main.go:227] handling current node
	I0429 00:10:38.552408       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:10:38.552572       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:10:38.553688       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:10:38.554058       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:10:38.554864       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:10:38.555063       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:10:48.572489       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:10:48.572551       1 main.go:227] handling current node
	I0429 00:10:48.572567       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:10:48.572576       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:10:48.572725       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:10:48.572767       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:10:48.572851       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:10:48.572860       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:10:58.585093       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:10:58.585272       1 main.go:227] handling current node
	I0429 00:10:58.585333       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:10:58.585367       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:10:58.585599       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0429 00:10:58.585650       1 main.go:250] Node ha-274394-m03 has CIDR [10.244.2.0/24] 
	I0429 00:10:58.585759       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:10:58.585796       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed] <==
	I0429 00:08:34.362189       1 options.go:221] external host was not specified, using 192.168.39.237
	I0429 00:08:34.367138       1 server.go:148] Version: v1.30.0
	I0429 00:08:34.367281       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:35.082154       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 00:08:35.082233       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 00:08:35.082406       1 instance.go:299] Using reconciler: lease
	I0429 00:08:35.082858       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 00:08:35.083083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0429 00:08:55.080166       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 00:08:55.080180       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0429 00:08:55.083604       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84] <==
	I0429 00:09:16.079761       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:09:16.079776       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 00:09:16.114029       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:09:16.116185       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:09:16.116223       1 policy_source.go:224] refreshing policies
	I0429 00:09:16.152859       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:09:16.152956       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:09:16.153079       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:09:16.153172       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:09:16.153889       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:09:16.154045       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:09:16.154095       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:09:16.154109       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:09:16.154139       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:09:16.154145       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:09:16.154971       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:09:16.160455       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0429 00:09:16.172743       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250 192.168.39.43]
	I0429 00:09:16.174566       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 00:09:16.185602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 00:09:16.190181       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 00:09:16.202716       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:09:17.063360       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 00:09:17.513621       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.237 192.168.39.250 192.168.39.43]
	W0429 00:09:27.514104       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.237 192.168.39.43]
	
	
	==> kube-controller-manager [35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e] <==
	I0429 00:09:28.834377       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 00:09:28.839502       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0429 00:09:28.840619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="134.643µs"
	I0429 00:09:28.841004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="156.726µs"
	I0429 00:09:28.841464       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 00:09:28.847700       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:09:29.235214       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:09:29.235262       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:09:29.301855       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:09:31.549258       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tnc4s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tnc4s\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 00:09:31.550155       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b370ceb0-874e-4edc-8bf1-8a857f43f5d3", APIVersion:"v1", ResourceVersion:"268", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tnc4s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tnc4s": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:09:31.585792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.149196ms"
	I0429 00:09:31.586078       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.758µs"
	I0429 00:09:39.677284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.145996ms"
	I0429 00:09:39.677390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.137µs"
	I0429 00:09:41.549304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.889127ms"
	I0429 00:09:41.550496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="257.696µs"
	I0429 00:09:41.553127       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tnc4s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tnc4s\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 00:09:41.553401       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b370ceb0-874e-4edc-8bf1-8a857f43f5d3", APIVersion:"v1", ResourceVersion:"268", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tnc4s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tnc4s": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:10:04.617874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.551232ms"
	I0429 00:10:04.618445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.097µs"
	I0429 00:10:06.361275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.314µs"
	I0429 00:10:26.722960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.616433ms"
	I0429 00:10:26.723252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="164.492µs"
	I0429 00:10:55.536281       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	
	
	==> kube-controller-manager [a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224] <==
	I0429 00:08:35.148046       1 serving.go:380] Generated self-signed cert in-memory
	I0429 00:08:35.809972       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 00:08:35.810030       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:35.811993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:08:35.812139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:08:35.812725       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:08:35.812805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0429 00:08:56.091659       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.237:8443/healthz\": dial tcp 192.168.39.237:8443: connect: connection refused"
	
	
	==> kube-proxy [10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a] <==
	E0429 00:05:36.758010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.829412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.829670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.830076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:58.261311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:58.261384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:01.334057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:01.334517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:01.335100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:01.335164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:16.695150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:16.695472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:19.766738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:19.766972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:19.766768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:19.767245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93] <==
	I0429 00:08:35.583826       1 server_linux.go:69] "Using iptables proxy"
	E0429 00:08:38.006583       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:41.078493       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:44.149707       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:50.294133       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:59.511226       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:09:17.943528       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 00:09:17.943708       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0429 00:09:18.104166       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:09:18.105346       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:09:18.105472       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:09:18.128421       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:09:18.128649       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:09:18.128691       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:09:18.130853       1 config.go:192] "Starting service config controller"
	I0429 00:09:18.130973       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:09:18.131015       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:09:18.131019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:09:18.132528       1 config.go:319] "Starting node config controller"
	I0429 00:09:18.132563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:09:18.231053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:09:18.231395       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:09:18.234614       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a] <==
	W0429 00:09:11.145300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.145386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.442399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.442524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.652470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.652691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.995417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.237:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.995505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.237:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.209458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.209574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.277354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.277435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.569381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.569501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.849032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.849148       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.945562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.237:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.945712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.237:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:13.138204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:13.138291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:13.549813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:13.549991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:16.092545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:09:16.092981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0429 00:09:16.199563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1] <==
	W0429 00:06:48.633181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:48.633287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:48.755489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:06:48.755552       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:06:48.792823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:06:48.793014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:06:48.859716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:06:48.859812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:06:49.347686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:06:49.347719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:06:51.341388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:51.341552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:51.612108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:06:51.612224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 00:06:51.681854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:06:51.682041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:06:52.676215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:52.676254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:52.733717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:52.733783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:52.936896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:06:52.937008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:06:53.173884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:06:53.174089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:06:53.386241       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:09:17 ha-274394 kubelet[1379]: E0429 00:09:17.941330    1379 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-274394.17ca976b87299170\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-274394.17ca976b87299170  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-274394,UID:4efe96637929623fb8b0eb26a06bea4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-274394,},FirstTimestamp:2024-04-29 00:04:56.252838256 +0000 UTC m=+450.237650067,LastTimestamp:2024-04-29 00:05:00.263272255 +0000 UTC m=+454.248084058,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-274394,}"
	Apr 29 00:09:17 ha-274394 kubelet[1379]: I0429 00:09:17.941603    1379 status_manager.go:853] "Failed to get status for pod" podUID="b291d6ca-3a9b-4dd0-b0e9-a183347e7d26" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 29 00:09:17 ha-274394 kubelet[1379]: E0429 00:09:17.942191    1379 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-274394\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 29 00:09:25 ha-274394 kubelet[1379]: I0429 00:09:25.396749    1379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-wwl6p" podStartSLOduration=542.855941095 podStartE2EDuration="9m5.396716035s" podCreationTimestamp="2024-04-29 00:00:20 +0000 UTC" firstStartedPulling="2024-04-29 00:00:21.06085905 +0000 UTC m=+175.045670863" lastFinishedPulling="2024-04-29 00:00:23.601633987 +0000 UTC m=+177.586445803" observedRunningTime="2024-04-29 00:00:24.071475773 +0000 UTC m=+178.056287596" watchObservedRunningTime="2024-04-29 00:09:25.396716035 +0000 UTC m=+719.381527857"
	Apr 29 00:09:26 ha-274394 kubelet[1379]: E0429 00:09:26.209763    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:09:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:09:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:09:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:09:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:09:27 ha-274394 kubelet[1379]: I0429 00:09:27.175832    1379 scope.go:117] "RemoveContainer" containerID="75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e"
	Apr 29 00:09:27 ha-274394 kubelet[1379]: I0429 00:09:27.176689    1379 scope.go:117] "RemoveContainer" containerID="95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f"
	Apr 29 00:09:27 ha-274394 kubelet[1379]: E0429 00:09:27.176864    1379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b291d6ca-3a9b-4dd0-b0e9-a183347e7d26)\"" pod="kube-system/storage-provisioner" podUID="b291d6ca-3a9b-4dd0-b0e9-a183347e7d26"
	Apr 29 00:09:42 ha-274394 kubelet[1379]: I0429 00:09:42.176531    1379 scope.go:117] "RemoveContainer" containerID="95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f"
	Apr 29 00:09:42 ha-274394 kubelet[1379]: E0429 00:09:42.176743    1379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b291d6ca-3a9b-4dd0-b0e9-a183347e7d26)\"" pod="kube-system/storage-provisioner" podUID="b291d6ca-3a9b-4dd0-b0e9-a183347e7d26"
	Apr 29 00:09:57 ha-274394 kubelet[1379]: I0429 00:09:57.176026    1379 scope.go:117] "RemoveContainer" containerID="95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f"
	Apr 29 00:09:57 ha-274394 kubelet[1379]: E0429 00:09:57.176295    1379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b291d6ca-3a9b-4dd0-b0e9-a183347e7d26)\"" pod="kube-system/storage-provisioner" podUID="b291d6ca-3a9b-4dd0-b0e9-a183347e7d26"
	Apr 29 00:10:08 ha-274394 kubelet[1379]: I0429 00:10:08.177474    1379 scope.go:117] "RemoveContainer" containerID="95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f"
	Apr 29 00:10:18 ha-274394 kubelet[1379]: I0429 00:10:18.176493    1379 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-274394" podUID="ce6151de-754a-4f15-94d4-71f4fb9cbd21"
	Apr 29 00:10:18 ha-274394 kubelet[1379]: I0429 00:10:18.211458    1379 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-274394"
	Apr 29 00:10:26 ha-274394 kubelet[1379]: I0429 00:10:26.199902    1379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-274394" podStartSLOduration=8.199859389 podStartE2EDuration="8.199859389s" podCreationTimestamp="2024-04-29 00:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 00:10:26.199497465 +0000 UTC m=+780.184309288" watchObservedRunningTime="2024-04-29 00:10:26.199859389 +0000 UTC m=+780.184671213"
	Apr 29 00:10:26 ha-274394 kubelet[1379]: E0429 00:10:26.208453    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:10:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:11:03.468294   43972 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-274394 -n ha-274394
helpers_test.go:261: (dbg) Run:  kubectl --context ha-274394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.60s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 stop -v=7 --alsologtostderr: exit status 82 (2m0.496261704s)

                                                
                                                
-- stdout --
	* Stopping node "ha-274394-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:11:24.007490   44378 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:11:24.007619   44378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:11:24.007625   44378 out.go:304] Setting ErrFile to fd 2...
	I0429 00:11:24.007631   44378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:11:24.008116   44378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:11:24.008404   44378 out.go:298] Setting JSON to false
	I0429 00:11:24.008496   44378 mustload.go:65] Loading cluster: ha-274394
	I0429 00:11:24.008855   44378 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:11:24.008954   44378 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0429 00:11:24.009147   44378 mustload.go:65] Loading cluster: ha-274394
	I0429 00:11:24.009308   44378 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:11:24.009349   44378 stop.go:39] StopHost: ha-274394-m04
	I0429 00:11:24.009754   44378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:11:24.009800   44378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:11:24.027825   44378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0429 00:11:24.028430   44378 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:11:24.029187   44378 main.go:141] libmachine: Using API Version  1
	I0429 00:11:24.029226   44378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:11:24.029599   44378 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:11:24.032157   44378 out.go:177] * Stopping node "ha-274394-m04"  ...
	I0429 00:11:24.033644   44378 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 00:11:24.033679   44378 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:11:24.033920   44378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 00:11:24.033943   44378 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:11:24.037356   44378 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:11:24.037859   44378 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:10:48 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:11:24.037916   44378 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:11:24.038074   44378 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:11:24.038261   44378 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:11:24.038452   44378 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:11:24.038633   44378 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	I0429 00:11:24.127759   44378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 00:11:24.184536   44378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 00:11:24.240551   44378 main.go:141] libmachine: Stopping "ha-274394-m04"...
	I0429 00:11:24.240580   44378 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:11:24.242347   44378 main.go:141] libmachine: (ha-274394-m04) Calling .Stop
	I0429 00:11:24.246050   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 0/120
	I0429 00:11:25.247435   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 1/120
	I0429 00:11:26.248919   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 2/120
	I0429 00:11:27.250791   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 3/120
	I0429 00:11:28.252648   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 4/120
	I0429 00:11:29.254727   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 5/120
	I0429 00:11:30.256587   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 6/120
	I0429 00:11:31.257879   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 7/120
	I0429 00:11:32.259484   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 8/120
	I0429 00:11:33.260925   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 9/120
	I0429 00:11:34.262666   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 10/120
	I0429 00:11:35.264858   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 11/120
	I0429 00:11:36.266717   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 12/120
	I0429 00:11:37.268540   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 13/120
	I0429 00:11:38.269934   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 14/120
	I0429 00:11:39.271625   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 15/120
	I0429 00:11:40.273018   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 16/120
	I0429 00:11:41.274324   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 17/120
	I0429 00:11:42.276538   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 18/120
	I0429 00:11:43.278901   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 19/120
	I0429 00:11:44.280634   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 20/120
	I0429 00:11:45.282231   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 21/120
	I0429 00:11:46.283634   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 22/120
	I0429 00:11:47.284958   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 23/120
	I0429 00:11:48.286561   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 24/120
	I0429 00:11:49.288715   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 25/120
	I0429 00:11:50.290033   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 26/120
	I0429 00:11:51.291221   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 27/120
	I0429 00:11:52.292496   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 28/120
	I0429 00:11:53.293786   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 29/120
	I0429 00:11:54.295415   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 30/120
	I0429 00:11:55.297520   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 31/120
	I0429 00:11:56.299250   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 32/120
	I0429 00:11:57.300907   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 33/120
	I0429 00:11:58.303047   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 34/120
	I0429 00:11:59.304797   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 35/120
	I0429 00:12:00.306928   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 36/120
	I0429 00:12:01.308525   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 37/120
	I0429 00:12:02.310298   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 38/120
	I0429 00:12:03.311551   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 39/120
	I0429 00:12:04.313558   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 40/120
	I0429 00:12:05.314779   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 41/120
	I0429 00:12:06.316275   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 42/120
	I0429 00:12:07.317694   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 43/120
	I0429 00:12:08.319238   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 44/120
	I0429 00:12:09.320962   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 45/120
	I0429 00:12:10.322332   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 46/120
	I0429 00:12:11.324520   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 47/120
	I0429 00:12:12.325768   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 48/120
	I0429 00:12:13.327339   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 49/120
	I0429 00:12:14.329752   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 50/120
	I0429 00:12:15.332023   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 51/120
	I0429 00:12:16.333235   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 52/120
	I0429 00:12:17.334740   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 53/120
	I0429 00:12:18.336287   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 54/120
	I0429 00:12:19.338224   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 55/120
	I0429 00:12:20.339679   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 56/120
	I0429 00:12:21.341202   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 57/120
	I0429 00:12:22.342587   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 58/120
	I0429 00:12:23.343933   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 59/120
	I0429 00:12:24.345312   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 60/120
	I0429 00:12:25.346645   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 61/120
	I0429 00:12:26.348008   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 62/120
	I0429 00:12:27.349444   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 63/120
	I0429 00:12:28.350701   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 64/120
	I0429 00:12:29.352724   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 65/120
	I0429 00:12:30.354856   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 66/120
	I0429 00:12:31.356236   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 67/120
	I0429 00:12:32.357662   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 68/120
	I0429 00:12:33.359062   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 69/120
	I0429 00:12:34.361439   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 70/120
	I0429 00:12:35.363010   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 71/120
	I0429 00:12:36.364533   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 72/120
	I0429 00:12:37.365891   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 73/120
	I0429 00:12:38.367219   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 74/120
	I0429 00:12:39.368677   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 75/120
	I0429 00:12:40.370444   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 76/120
	I0429 00:12:41.372424   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 77/120
	I0429 00:12:42.373704   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 78/120
	I0429 00:12:43.375090   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 79/120
	I0429 00:12:44.377302   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 80/120
	I0429 00:12:45.378791   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 81/120
	I0429 00:12:46.380451   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 82/120
	I0429 00:12:47.381899   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 83/120
	I0429 00:12:48.383194   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 84/120
	I0429 00:12:49.384911   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 85/120
	I0429 00:12:50.386339   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 86/120
	I0429 00:12:51.387581   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 87/120
	I0429 00:12:52.388986   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 88/120
	I0429 00:12:53.390421   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 89/120
	I0429 00:12:54.392455   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 90/120
	I0429 00:12:55.393735   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 91/120
	I0429 00:12:56.395384   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 92/120
	I0429 00:12:57.396933   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 93/120
	I0429 00:12:58.399510   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 94/120
	I0429 00:12:59.401335   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 95/120
	I0429 00:13:00.402935   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 96/120
	I0429 00:13:01.404522   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 97/120
	I0429 00:13:02.405916   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 98/120
	I0429 00:13:03.408002   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 99/120
	I0429 00:13:04.410208   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 100/120
	I0429 00:13:05.411485   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 101/120
	I0429 00:13:06.412876   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 102/120
	I0429 00:13:07.414286   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 103/120
	I0429 00:13:08.416454   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 104/120
	I0429 00:13:09.417993   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 105/120
	I0429 00:13:10.420099   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 106/120
	I0429 00:13:11.422059   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 107/120
	I0429 00:13:12.423447   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 108/120
	I0429 00:13:13.425298   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 109/120
	I0429 00:13:14.427561   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 110/120
	I0429 00:13:15.428686   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 111/120
	I0429 00:13:16.430088   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 112/120
	I0429 00:13:17.431586   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 113/120
	I0429 00:13:18.433141   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 114/120
	I0429 00:13:19.434952   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 115/120
	I0429 00:13:20.436469   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 116/120
	I0429 00:13:21.437880   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 117/120
	I0429 00:13:22.439130   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 118/120
	I0429 00:13:23.440560   44378 main.go:141] libmachine: (ha-274394-m04) Waiting for machine to stop 119/120
	I0429 00:13:24.441359   44378 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 00:13:24.441415   44378 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 00:13:24.443606   44378 out.go:177] 
	W0429 00:13:24.445155   44378 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 00:13:24.445169   44378 out.go:239] * 
	* 
	W0429 00:13:24.447373   44378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 00:13:24.448801   44378 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-274394 stop -v=7 --alsologtostderr": exit status 82
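For context on the GUEST_STOP_TIMEOUT above: the stop path polls the VM state roughly once per second for up to 120 attempts (the "Waiting for machine to stop N/120" lines) and gives up with exit status 82 if the machine is still "Running". The Go sketch below only illustrates that polling pattern as it appears in this log; it is not the actual minikube/libmachine code, and the function and type names are assumptions.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machineState is a stand-in for the driver's state query (hypothetical).
type machineState func() string

// waitForStop polls the machine state once per second, up to maxAttempts,
// and returns an error if the machine never leaves the "Running" state.
func waitForStop(getState machineState, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A guest that never leaves "Running" reproduces the timeout seen above.
	alwaysRunning := func() string { return "Running" }
	if err := waitForStop(alwaysRunning, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}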
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr: exit status 3 (18.980406454s)

                                                
                                                
-- stdout --
	ha-274394
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274394-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:13:24.505378   44813 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:13:24.505515   44813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:13:24.505526   44813 out.go:304] Setting ErrFile to fd 2...
	I0429 00:13:24.505530   44813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:13:24.505742   44813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:13:24.505948   44813 out.go:298] Setting JSON to false
	I0429 00:13:24.505973   44813 mustload.go:65] Loading cluster: ha-274394
	I0429 00:13:24.506078   44813 notify.go:220] Checking for updates...
	I0429 00:13:24.506451   44813 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:13:24.506469   44813 status.go:255] checking status of ha-274394 ...
	I0429 00:13:24.506920   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.506951   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.522872   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0429 00:13:24.523231   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.523833   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.523854   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.524333   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.524592   44813 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:13:24.526372   44813 status.go:330] ha-274394 host status = "Running" (err=<nil>)
	I0429 00:13:24.526392   44813 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:13:24.526750   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.526827   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.542860   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I0429 00:13:24.543188   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.543648   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.543677   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.543985   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.544161   44813 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:13:24.546805   44813 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:13:24.547212   44813 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:13:24.547232   44813 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:13:24.547354   44813 host.go:66] Checking if "ha-274394" exists ...
	I0429 00:13:24.547604   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.547645   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.562687   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0429 00:13:24.563101   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.563523   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.563542   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.563891   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.564084   44813 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:13:24.564290   44813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:13:24.564339   44813 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:13:24.566988   44813 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:13:24.567405   44813 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:13:24.567429   44813 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:13:24.567584   44813 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:13:24.567761   44813 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:13:24.567920   44813 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:13:24.568108   44813 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:13:24.653933   44813 ssh_runner.go:195] Run: systemctl --version
	I0429 00:13:24.664612   44813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:13:24.694198   44813 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:13:24.694233   44813 api_server.go:166] Checking apiserver status ...
	I0429 00:13:24.694290   44813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:13:24.714706   44813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5283/cgroup
	W0429 00:13:24.725856   44813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5283/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:13:24.725931   44813 ssh_runner.go:195] Run: ls
	I0429 00:13:24.731007   44813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:13:24.736118   44813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:13:24.736138   44813 status.go:422] ha-274394 apiserver status = Running (err=<nil>)
	I0429 00:13:24.736147   44813 status.go:257] ha-274394 status: &{Name:ha-274394 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:13:24.736168   44813 status.go:255] checking status of ha-274394-m02 ...
	I0429 00:13:24.736869   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.736922   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.751986   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0429 00:13:24.752359   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.752805   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.752835   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.753184   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.753370   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetState
	I0429 00:13:24.754819   44813 status.go:330] ha-274394-m02 host status = "Running" (err=<nil>)
	I0429 00:13:24.754837   44813 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:13:24.755129   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.755168   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.769303   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0429 00:13:24.769637   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.770096   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.770121   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.770468   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.770707   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetIP
	I0429 00:13:24.773466   44813 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:13:24.773898   44813 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:08:40 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:13:24.773940   44813 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:13:24.774092   44813 host.go:66] Checking if "ha-274394-m02" exists ...
	I0429 00:13:24.774349   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.774380   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.789837   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I0429 00:13:24.790194   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.790508   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.790531   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.790833   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.791016   44813 main.go:141] libmachine: (ha-274394-m02) Calling .DriverName
	I0429 00:13:24.791168   44813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:13:24.791186   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHHostname
	I0429 00:13:24.793706   44813 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:13:24.794133   44813 main.go:141] libmachine: (ha-274394-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ad:64", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:08:40 +0000 UTC Type:0 Mac:52:54:00:94:ad:64 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-274394-m02 Clientid:01:52:54:00:94:ad:64}
	I0429 00:13:24.794159   44813 main.go:141] libmachine: (ha-274394-m02) DBG | domain ha-274394-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:94:ad:64 in network mk-ha-274394
	I0429 00:13:24.794314   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHPort
	I0429 00:13:24.794470   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHKeyPath
	I0429 00:13:24.794621   44813 main.go:141] libmachine: (ha-274394-m02) Calling .GetSSHUsername
	I0429 00:13:24.794743   44813 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m02/id_rsa Username:docker}
	I0429 00:13:24.885023   44813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:13:24.908498   44813 kubeconfig.go:125] found "ha-274394" server: "https://192.168.39.254:8443"
	I0429 00:13:24.908528   44813 api_server.go:166] Checking apiserver status ...
	I0429 00:13:24.908567   44813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:13:24.931061   44813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W0429 00:13:24.947470   44813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:13:24.947538   44813 ssh_runner.go:195] Run: ls
	I0429 00:13:24.953543   44813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 00:13:24.958016   44813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 00:13:24.958055   44813 status.go:422] ha-274394-m02 apiserver status = Running (err=<nil>)
	I0429 00:13:24.958064   44813 status.go:257] ha-274394-m02 status: &{Name:ha-274394-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:13:24.958082   44813 status.go:255] checking status of ha-274394-m04 ...
	I0429 00:13:24.958399   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.958435   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.972757   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0429 00:13:24.973155   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.973619   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.973647   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.974004   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.974213   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetState
	I0429 00:13:24.975759   44813 status.go:330] ha-274394-m04 host status = "Running" (err=<nil>)
	I0429 00:13:24.975777   44813 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:13:24.976081   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.976118   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:24.989832   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0429 00:13:24.990208   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:24.990655   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:24.990679   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:24.991059   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:24.991238   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetIP
	I0429 00:13:24.994035   44813 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:13:24.994487   44813 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:10:48 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:13:24.994519   44813 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:13:24.994684   44813 host.go:66] Checking if "ha-274394-m04" exists ...
	I0429 00:13:24.994944   44813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:13:24.994973   44813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:13:25.010400   44813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0429 00:13:25.010832   44813 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:13:25.011295   44813 main.go:141] libmachine: Using API Version  1
	I0429 00:13:25.011325   44813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:13:25.011653   44813 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:13:25.011828   44813 main.go:141] libmachine: (ha-274394-m04) Calling .DriverName
	I0429 00:13:25.012003   44813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:13:25.012021   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHHostname
	I0429 00:13:25.014419   44813 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:13:25.014828   44813 main.go:141] libmachine: (ha-274394-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:92:5b", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 01:10:48 +0000 UTC Type:0 Mac:52:54:00:65:92:5b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-274394-m04 Clientid:01:52:54:00:65:92:5b}
	I0429 00:13:25.014853   44813 main.go:141] libmachine: (ha-274394-m04) DBG | domain ha-274394-m04 has defined IP address 192.168.39.106 and MAC address 52:54:00:65:92:5b in network mk-ha-274394
	I0429 00:13:25.014991   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHPort
	I0429 00:13:25.015145   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHKeyPath
	I0429 00:13:25.015291   44813 main.go:141] libmachine: (ha-274394-m04) Calling .GetSSHUsername
	I0429 00:13:25.015426   44813 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394-m04/id_rsa Username:docker}
	W0429 00:13:43.430202   44813 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0429 00:13:43.430295   44813 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0429 00:13:43.430313   44813 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0429 00:13:43.430322   44813 status.go:257] ha-274394-m04 status: &{Name:ha-274394-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0429 00:13:43.430342   44813 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr" : exit status 3
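The status output above probes each control-plane node through the load-balanced apiserver endpoint (https://192.168.39.254:8443/healthz, which returned 200 "ok"), while ha-274394-m04 fails earlier at the SSH dial with "no route to host". The snippet below is a minimal, self-contained sketch of such a health probe under stated assumptions; it is not the minikube status implementation, and skipping TLS verification is done here only to keep the example short (a real client would load the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The test cluster uses a self-signed CA, so certificate verification is
	// skipped in this illustrative sketch.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}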
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-274394 -n ha-274394
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-274394 logs -n 25: (1.92581089s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m04 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp testdata/cp-test.txt                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394:/home/docker/cp-test_ha-274394-m04_ha-274394.txt                       |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394 sudo cat                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394.txt                                 |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m02:/home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m02 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m03:/home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n                                                                 | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | ha-274394-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-274394 ssh -n ha-274394-m03 sudo cat                                          | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC | 29 Apr 24 00:01 UTC |
	|         | /home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-274394 node stop m02 -v=7                                                     | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-274394 node start m02 -v=7                                                    | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-274394 -v=7                                                           | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-274394 -v=7                                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-274394 --wait=true -v=7                                                    | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:06 UTC | 29 Apr 24 00:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-274394                                                                | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:11 UTC |                     |
	| node    | ha-274394 node delete m03 -v=7                                                   | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:11 UTC | 29 Apr 24 00:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-274394 stop -v=7                                                              | ha-274394 | jenkins | v1.33.0 | 29 Apr 24 00:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:06:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:06:52.197194   42604 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:06:52.197450   42604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:06:52.197459   42604 out.go:304] Setting ErrFile to fd 2...
	I0429 00:06:52.197463   42604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:06:52.197634   42604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:06:52.198179   42604 out.go:298] Setting JSON to false
	I0429 00:06:52.199037   42604 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6556,"bootTime":1714342656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:06:52.199094   42604 start.go:139] virtualization: kvm guest
	I0429 00:06:52.201431   42604 out.go:177] * [ha-274394] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:06:52.203314   42604 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:06:52.203339   42604 notify.go:220] Checking for updates...
	I0429 00:06:52.204757   42604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:06:52.206208   42604 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:06:52.207668   42604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:06:52.208956   42604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:06:52.210108   42604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:06:52.211774   42604 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:06:52.211851   42604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:06:52.212232   42604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:06:52.212266   42604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:06:52.229812   42604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0429 00:06:52.230244   42604 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:06:52.230708   42604 main.go:141] libmachine: Using API Version  1
	I0429 00:06:52.230726   42604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:06:52.231066   42604 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:06:52.231247   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.265051   42604 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:06:52.266590   42604 start.go:297] selected driver: kvm2
	I0429 00:06:52.266609   42604 start.go:901] validating driver "kvm2" against &{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:06:52.266787   42604 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:06:52.267122   42604 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:06:52.267192   42604 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:06:52.281336   42604 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:06:52.282001   42604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:06:52.282076   42604 cni.go:84] Creating CNI manager for ""
	I0429 00:06:52.282109   42604 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 00:06:52.282173   42604 start.go:340] cluster config:
	{Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:06:52.282296   42604 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:06:52.284123   42604 out.go:177] * Starting "ha-274394" primary control-plane node in "ha-274394" cluster
	I0429 00:06:52.285418   42604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:06:52.285468   42604 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:06:52.285482   42604 cache.go:56] Caching tarball of preloaded images
	I0429 00:06:52.285578   42604 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:06:52.285594   42604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:06:52.285755   42604 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/config.json ...
	I0429 00:06:52.286043   42604 start.go:360] acquireMachinesLock for ha-274394: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:06:52.286100   42604 start.go:364] duration metric: took 34.879µs to acquireMachinesLock for "ha-274394"
	I0429 00:06:52.286133   42604 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:06:52.286144   42604 fix.go:54] fixHost starting: 
	I0429 00:06:52.286520   42604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:06:52.286562   42604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:06:52.300078   42604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0429 00:06:52.300502   42604 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:06:52.300997   42604 main.go:141] libmachine: Using API Version  1
	I0429 00:06:52.301017   42604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:06:52.301343   42604 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:06:52.301531   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.301653   42604 main.go:141] libmachine: (ha-274394) Calling .GetState
	I0429 00:06:52.303375   42604 fix.go:112] recreateIfNeeded on ha-274394: state=Running err=<nil>
	W0429 00:06:52.303398   42604 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:06:52.305378   42604 out.go:177] * Updating the running kvm2 "ha-274394" VM ...
	I0429 00:06:52.306864   42604 machine.go:94] provisionDockerMachine start ...
	I0429 00:06:52.306886   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:06:52.307062   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.309317   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.309707   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.309731   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.309874   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.310064   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.310212   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.310350   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.310497   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.310669   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.310679   42604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:06:52.421035   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0429 00:06:52.421070   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.421324   42604 buildroot.go:166] provisioning hostname "ha-274394"
	I0429 00:06:52.421344   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.421521   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.424131   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.424521   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.424550   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.424675   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.424869   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.425043   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.425197   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.425346   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.425501   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.425512   42604 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-274394 && echo "ha-274394" | sudo tee /etc/hostname
	I0429 00:06:52.554357   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-274394
	
	I0429 00:06:52.554389   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.557098   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.557469   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.557498   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.557723   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:52.557903   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.558087   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:52.558218   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:52.558402   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:52.558579   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:52.558597   42604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-274394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-274394/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-274394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:06:52.671785   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:06:52.671822   42604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:06:52.671858   42604 buildroot.go:174] setting up certificates
	I0429 00:06:52.671869   42604 provision.go:84] configureAuth start
	I0429 00:06:52.671879   42604 main.go:141] libmachine: (ha-274394) Calling .GetMachineName
	I0429 00:06:52.672124   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:06:52.674876   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.675279   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.675300   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.675516   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:52.677499   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.677878   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:52.677905   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:52.678069   42604 provision.go:143] copyHostCerts
	I0429 00:06:52.678115   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:06:52.678160   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:06:52.678173   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:06:52.678263   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:06:52.678389   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:06:52.678420   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:06:52.678431   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:06:52.678473   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:06:52.678551   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:06:52.678569   42604 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:06:52.678573   42604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:06:52.678596   42604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:06:52.678659   42604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.ha-274394 san=[127.0.0.1 192.168.39.237 ha-274394 localhost minikube]
	I0429 00:06:53.068566   42604 provision.go:177] copyRemoteCerts
	I0429 00:06:53.068629   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:06:53.068656   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:53.071443   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.071902   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:53.071929   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.072090   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:53.072302   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.072483   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:53.072652   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:06:53.158996   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 00:06:53.159079   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:06:53.198083   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 00:06:53.198169   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 00:06:53.239032   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 00:06:53.239097   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:06:53.281113   42604 provision.go:87] duration metric: took 609.230569ms to configureAuth
	I0429 00:06:53.281144   42604 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:06:53.281434   42604 config.go:182] Loaded profile config "ha-274394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:06:53.281522   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:06:53.284407   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.284880   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:06:53.284913   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:06:53.285091   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:06:53.285322   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.285503   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:06:53.285667   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:06:53.285839   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:06:53.286079   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:06:53.286101   42604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:08:24.158742   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:08:24.158773   42604 machine.go:97] duration metric: took 1m31.851893107s to provisionDockerMachine
	I0429 00:08:24.158788   42604 start.go:293] postStartSetup for "ha-274394" (driver="kvm2")
	I0429 00:08:24.158805   42604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:08:24.158838   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.159184   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:08:24.159218   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.161934   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.162411   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.162454   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.162563   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.162746   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.162894   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.163019   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.251014   42604 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:08:24.256642   42604 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:08:24.256670   42604 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:08:24.256753   42604 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:08:24.256837   42604 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:08:24.256849   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0429 00:08:24.256934   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:08:24.267841   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:08:24.297386   42604 start.go:296] duration metric: took 138.583205ms for postStartSetup
	I0429 00:08:24.297435   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.297759   42604 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 00:08:24.297789   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.300119   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.300515   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.300542   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.300645   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.300816   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.300961   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.301108   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	W0429 00:08:24.390728   42604 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 00:08:24.390750   42604 fix.go:56] duration metric: took 1m32.104607749s for fixHost
	I0429 00:08:24.390771   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.392977   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.393383   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.393416   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.393540   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.393724   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.393873   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.394041   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.394199   42604 main.go:141] libmachine: Using SSH client type: native
	I0429 00:08:24.394375   42604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0429 00:08:24.394385   42604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:08:24.499552   42604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714349304.449748376
	
	I0429 00:08:24.499574   42604 fix.go:216] guest clock: 1714349304.449748376
	I0429 00:08:24.499583   42604 fix.go:229] Guest: 2024-04-29 00:08:24.449748376 +0000 UTC Remote: 2024-04-29 00:08:24.39075762 +0000 UTC m=+92.239872716 (delta=58.990756ms)
	I0429 00:08:24.499622   42604 fix.go:200] guest clock delta is within tolerance: 58.990756ms
	I0429 00:08:24.499635   42604 start.go:83] releasing machines lock for "ha-274394", held for 1m32.213510999s
	I0429 00:08:24.499653   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.499896   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:08:24.502347   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.502681   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.502706   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.502909   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503480   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503689   42604 main.go:141] libmachine: (ha-274394) Calling .DriverName
	I0429 00:08:24.503771   42604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:08:24.503806   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.503915   42604 ssh_runner.go:195] Run: cat /version.json
	I0429 00:08:24.503935   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHHostname
	I0429 00:08:24.506801   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.506825   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507196   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.507241   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507269   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:24.507286   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:24.507317   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.507516   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.507521   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHPort
	I0429 00:08:24.507674   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHKeyPath
	I0429 00:08:24.507700   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.507822   42604 main.go:141] libmachine: (ha-274394) Calling .GetSSHUsername
	I0429 00:08:24.507823   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.507967   42604 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/ha-274394/id_rsa Username:docker}
	I0429 00:08:24.609485   42604 ssh_runner.go:195] Run: systemctl --version
	I0429 00:08:24.616061   42604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:08:24.789399   42604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:08:24.797255   42604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:08:24.797340   42604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:08:24.807491   42604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:08:24.807521   42604 start.go:494] detecting cgroup driver to use...
	I0429 00:08:24.807586   42604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:08:24.825160   42604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:08:24.839957   42604 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:08:24.840003   42604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:08:24.854215   42604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:08:24.868170   42604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:08:25.030227   42604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:08:25.186975   42604 docker.go:233] disabling docker service ...
	I0429 00:08:25.187051   42604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:08:25.203862   42604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:08:25.218225   42604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:08:25.376978   42604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:08:25.535088   42604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:08:25.550812   42604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:08:25.571859   42604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:08:25.571907   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.583842   42604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:08:25.583898   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.595144   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.606128   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.617461   42604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:08:25.628739   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.640930   42604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.654503   42604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:08:25.666026   42604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:08:25.676598   42604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:08:25.687486   42604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:08:25.846601   42604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:08:26.522677   42604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:08:26.522736   42604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:08:26.528043   42604 start.go:562] Will wait 60s for crictl version
	I0429 00:08:26.528087   42604 ssh_runner.go:195] Run: which crictl
	I0429 00:08:26.532332   42604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:08:26.579797   42604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:08:26.579862   42604 ssh_runner.go:195] Run: crio --version
	I0429 00:08:26.614566   42604 ssh_runner.go:195] Run: crio --version
	I0429 00:08:26.650706   42604 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:08:26.651948   42604 main.go:141] libmachine: (ha-274394) Calling .GetIP
	I0429 00:08:26.654818   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:26.655215   42604 main.go:141] libmachine: (ha-274394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:02:06", ip: ""} in network mk-ha-274394: {Iface:virbr1 ExpiryTime:2024-04-29 00:57:00 +0000 UTC Type:0 Mac:52:54:00:a1:02:06 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-274394 Clientid:01:52:54:00:a1:02:06}
	I0429 00:08:26.655252   42604 main.go:141] libmachine: (ha-274394) DBG | domain ha-274394 has defined IP address 192.168.39.237 and MAC address 52:54:00:a1:02:06 in network mk-ha-274394
	I0429 00:08:26.655534   42604 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:08:26.660587   42604 kubeadm.go:877] updating cluster {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:08:26.660721   42604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:08:26.660758   42604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:08:26.709663   42604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:08:26.709683   42604 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:08:26.709726   42604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:08:26.750223   42604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:08:26.750243   42604 cache_images.go:84] Images are preloaded, skipping loading
	I0429 00:08:26.750251   42604 kubeadm.go:928] updating node { 192.168.39.237 8443 v1.30.0 crio true true} ...
	I0429 00:08:26.750349   42604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-274394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:08:26.750407   42604 ssh_runner.go:195] Run: crio config
	I0429 00:08:26.804174   42604 cni.go:84] Creating CNI manager for ""
	I0429 00:08:26.804196   42604 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 00:08:26.804205   42604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:08:26.804229   42604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-274394 NodeName:ha-274394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:08:26.804419   42604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-274394"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:08:26.804444   42604 kube-vip.go:111] generating kube-vip config ...
	I0429 00:08:26.804482   42604 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 00:08:26.818406   42604 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 00:08:26.818523   42604 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 00:08:26.818587   42604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:08:26.830511   42604 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:08:26.830584   42604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 00:08:26.841867   42604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0429 00:08:26.861401   42604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:08:26.881198   42604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0429 00:08:26.901552   42604 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 00:08:26.920501   42604 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 00:08:26.932794   42604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:08:27.146743   42604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:08:27.166944   42604 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394 for IP: 192.168.39.237
	I0429 00:08:27.166964   42604 certs.go:194] generating shared ca certs ...
	I0429 00:08:27.166985   42604 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.167129   42604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:08:27.167178   42604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:08:27.167191   42604 certs.go:256] generating profile certs ...
	I0429 00:08:27.167261   42604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/client.key
	I0429 00:08:27.167286   42604 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e
	I0429 00:08:27.167296   42604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237 192.168.39.43 192.168.39.250 192.168.39.254]
	I0429 00:08:27.279401   42604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e ...
	I0429 00:08:27.279426   42604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e: {Name:mk1a57083afaac3908235246b81d4ca465b0a12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.279607   42604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e ...
	I0429 00:08:27.279622   42604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e: {Name:mk7360cee927f7f0e32d1159fbc68eac80a8e909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:08:27.279719   42604 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt.fb967c3e -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt
	I0429 00:08:27.279858   42604 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key.fb967c3e -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key
	I0429 00:08:27.279970   42604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key
	I0429 00:08:27.279986   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 00:08:27.279998   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 00:08:27.280008   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 00:08:27.280021   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 00:08:27.280034   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 00:08:27.280043   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 00:08:27.280061   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 00:08:27.280073   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 00:08:27.280122   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:08:27.280152   42604 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:08:27.280161   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:08:27.280180   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:08:27.280200   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:08:27.280224   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:08:27.280262   42604 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:08:27.280287   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.280301   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.280313   42604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.280911   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:08:27.342679   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:08:27.370138   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:08:27.401268   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:08:27.429048   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 00:08:27.456869   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 00:08:27.492941   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:08:27.520962   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/ha-274394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 00:08:27.547306   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:08:27.573710   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:08:27.628106   42604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:08:27.656098   42604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:08:27.680880   42604 ssh_runner.go:195] Run: openssl version
	I0429 00:08:27.688523   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:08:27.700994   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.706655   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.706707   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:08:27.713194   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:08:27.723967   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:08:27.737446   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.742959   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.743017   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:08:27.749953   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:08:27.761347   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:08:27.773738   42604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.781076   42604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.781129   42604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:08:27.788092   42604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:08:27.800347   42604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:08:27.805372   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:08:27.811683   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:08:27.818280   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:08:27.824765   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:08:27.831052   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:08:27.837321   42604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 00:08:27.843568   42604 kubeadm.go:391] StartCluster: {Name:ha-274394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-274394 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.106 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:08:27.843677   42604 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:08:27.843723   42604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:08:27.893340   42604 cri.go:91] found id: "75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e"
	I0429 00:08:27.893370   42604 cri.go:91] found id: "ff0985c2cbc2faeb24fdbf451088ac783cea059c29266fd0634ff2631b9618a9"
	I0429 00:08:27.893376   42604 cri.go:91] found id: "774658ec1346c8dea1393ee857b30d7310ad67da3bfb33af7b0865061134263e"
	I0429 00:08:27.893381   42604 cri.go:91] found id: "8ec6505d955c2854cade67c18fbccd249cffceeae0c551bde8591ec4af4ca404"
	I0429 00:08:27.893385   42604 cri.go:91] found id: "b7f3af13cf11d4dfe1dca83c7ae580e606bd39ff5ca3aa2d712f7055006b40f5"
	I0429 00:08:27.893389   42604 cri.go:91] found id: "0bf681974a82a099157f031fd9f5b94ff7f7f4dab5438c9f3cfc78c297cd79c6"
	I0429 00:08:27.893394   42604 cri.go:91] found id: "39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e"
	I0429 00:08:27.893398   42604 cri.go:91] found id: "4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a"
	I0429 00:08:27.893419   42604 cri.go:91] found id: "10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a"
	I0429 00:08:27.893430   42604 cri.go:91] found id: "1144436f5b67a8616a5245d67f5f5000b19f39fd4aaa77c30a19d3feaf8eb036"
	I0429 00:08:27.893434   42604 cri.go:91] found id: "a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6"
	I0429 00:08:27.893438   42604 cri.go:91] found id: "cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1"
	I0429 00:08:27.893442   42604 cri.go:91] found id: "d4d50ed07ba2205bdc6b968a0f66deb63c9708a060e4079e707d79f62b78716f"
	I0429 00:08:27.893447   42604 cri.go:91] found id: "ec35813faf9fb16633b3058ea24f1d8ceeb9683c260758aa3e3ba9895ff7c6e9"
	I0429 00:08:27.893457   42604 cri.go:91] found id: ""
	I0429 00:08:27.893513   42604 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.168571197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b421750-2ba0-45b7-a390-724eba8aad88 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.172521645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41dddc86-7021-4166-a0ab-3086a026a772 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.173892110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349624173865528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41dddc86-7021-4166-a0ab-3086a026a772 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.174598730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab5a657a-685b-4d58-9655-d255a027d8df name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.174706690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab5a657a-685b-4d58-9655-d255a027d8df name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.176183579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab5a657a-685b-4d58-9655-d255a027d8df name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.178148611Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8643a3d-0e47-475c-9447-fd75fab4cafd name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.182832487Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wwl6p,Uid:a6a06956-e991-47ab-986f-34d9467a7dec,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349346668765964,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:00:20.232551293Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-274394,Uid:b2b390e5e039b165a1793386b9ae3070,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1714349327231550694,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{kubernetes.io/config.hash: b2b390e5e039b165a1793386b9ae3070,kubernetes.io/config.seen: 2024-04-29T00:08:26.872175224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rslhx,Uid:b73501ce-7591-45a5-b59e-331f7752c71b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349313007497659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-28T23:57:41.185036167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkdcv,Uid:60272694-edd8-4a8c-abd9-707cdb1355ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312961751769,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:41.198044791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-274394,Uid:4efe96637929623fb8b0eb26a06bea4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312923068075,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.237:8443,kubernetes.io/config.hash: 4efe96637929623fb8b0eb26a06bea4f,kubernetes.io/config.seen: 2024-04-28T23:57:26.124687930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312914492384,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-28T23:57:41.189571842Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&PodSandboxMetadata{Name:kube-controller-ma
nager-ha-274394,Uid:d48b86fddc4d5249a88aeb3e4377a6f7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312911522739,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d48b86fddc4d5249a88aeb3e4377a6f7,kubernetes.io/config.seen: 2024-04-28T23:57:26.124679832Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&PodSandboxMetadata{Name:kube-proxy-pwbfs,Uid:5303f947-6c3f-47b5-b396-33b92049d48f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312901752775,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:38.913288702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&PodSandboxMetadata{Name:etcd-ha-274394,Uid:2ada5cad8658d509e973050889a81f40,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312900876787,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kubernetes.io/config.hash: 2ada5cad8658d509e973050889a81f40,kubernetes.io/config.seen: 2024-04-28T23:57:26.124687065Z,kubernetes.io/config.source: file,},RuntimeHa
ndler:,},&PodSandbox{Id:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-274394,Uid:d2454bba76a07a5ac0349d2285d97e46,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349312893425142,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2454bba76a07a5ac0349d2285d97e46,kubernetes.io/config.seen: 2024-04-28T23:57:26.124682909Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&PodSandboxMetadata{Name:kindnet-p6qmw,Uid:528219cb-5850-471c-97de-c31dcca693b1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714349306934681982,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-28T23:57:38.921396163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c8643a3d-0e47-475c-9447-fd75fab4cafd name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.183860000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9798e5c-cdde-413b-97f7-7207e0a6fd84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.184046195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9798e5c-cdde-413b-97f7-7207e0a6fd84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.184722997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba
76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9798e5c-cdde-413b-97f7-7207e0a6fd84 name=/runtime.v1.RuntimeService/ListContainers
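The CRI-O debug entries above show the CRI gRPC traffic behind these listings: ListContainersRequest filtered to CONTAINER_RUNNING and ListPodSandboxRequest filtered to SANDBOX_READY. As a point of reference, the same running-containers query can be issued directly against the runtime socket; the following is a minimal sketch, assuming CRI-O's default socket at /var/run/crio/crio.sock and the k8s.io/cri-api client (not part of this test):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed default CRI-O endpoint; adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same filter as the logged ListContainersRequest: running containers only.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %s attempt=%d\n", c.Id, c.Metadata.Name, c.Metadata.Attempt)
		}
	}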
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.233569373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92904f88-244b-4c0d-9d38-3c5bbf89e071 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.233670820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92904f88-244b-4c0d-9d38-3c5bbf89e071 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.234537781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=891c8569-0e94-4407-b6ce-045655ec0a30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.235142821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349624235119799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=891c8569-0e94-4407-b6ce-045655ec0a30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.235601714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a695596d-e8cd-456e-aa6b-9c3bc10069b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.235684817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a695596d-e8cd-456e-aa6b-9c3bc10069b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.236149733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a695596d-e8cd-456e-aa6b-9c3bc10069b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.289071878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=878b2137-dda5-4159-b7a9-c4076c5fb52f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.289178862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=878b2137-dda5-4159-b7a9-c4076c5fb52f name=/runtime.v1.RuntimeService/Version
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.290776107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ed6e7d0-21d8-4179-a587-9ae58c2f1649 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.291422872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714349624291398511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ed6e7d0-21d8-4179-a587-9ae58c2f1649 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.292148731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=006db205-fe1b-4d66-a3a8-1190bb915957 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.292232148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=006db205-fe1b-4d66-a3a8-1190bb915957 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:13:44 ha-274394 crio[3921]: time="2024-04-29 00:13:44.292617431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb55e9c6d522d396d155ef1215247b959f12655839d45b9f564a878032f33c2f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714349408190899595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714349367205409094,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kubernetes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714349354194822067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714349351190899107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c0243bf3189cc2b4d6f357927410147bcabb14e3ca640327ff4909ec5d3814f,PodSandboxId:c39536af7def5fde4f18905cb572ec9d55b6bd50b254affafe0adcc82fb84a25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714349346815077655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kubernetes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90230580bb8966b7fadfe92ba2a2195539fb6f674e409ec35f0dd02caefbf3bd,PodSandboxId:f72b9f6f6c6571f796d2f7e6082ee8d09f14b4e5d3c2668410288857008b3e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714349327340739238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b390e5e039b165a1793386b9ae3070,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93,PodSandboxId:a9733b733641b82e35160f8b58f159969e4643b9a913bb0611d50ca82f550bc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714349313770969390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9,PodSandboxId:c29249a598bab8b65fe595d7748941377b5ed5da05e40d06fb7a192fcda58554,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313835630031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed,PodSandboxId:fbde7716d2c883035f5a1a77f8a386da7c83f981e74e07553f99b894451030b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714349313671699171,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed,PodSandboxId:8b3ab43b76165178f2b2c88508090c481ca1a2355b7099dd168f7eeb99dc5399,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714349313659367759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-274394,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 4efe96637929623fb8b0eb26a06bea4f,},Annotations:map[string]string{io.kubernetes.container.hash: a5ad7bfd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f,PodSandboxId:4606d15628a337db840424de066227a17f44a68351499a36ca21454557681aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714349313316428908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: b291d6ca-3a9b-4dd0-b0e9-a183347e7d26,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5d77a8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06,PodSandboxId:e244f4be4872c53f75efd9d9faadfd20cfbce91c6db7d0406fe33f4cfd429534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714349313516599131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e97305
0889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224,PodSandboxId:72a5aebc541111631ff2db760fe78cc551e9faacd55cf2f81a66eaf67e83a635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714349313460341969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48b86fddc4d5249a
88aeb3e4377a6f7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a,PodSandboxId:ae92a5ea253f1670e63f4a78d88b6f655ef42b85d459cf32473db6409d9ab5a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714349313372516151,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Ann
otations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e,PodSandboxId:ae51d2fad4a666ad92cebcc9343729226e69d24a133bc86f28b91ca208ed6dae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714349307274996244,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p6qmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528219cb-5850-471c-97de-c31dcca693b1,},Annotations:map[string]string{io.kuber
netes.container.hash: d7a973a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6191db59237ab5701bd22fdbebbfb801f06631a5e7adbe153635a5d5505cede2,PodSandboxId:7dc34422a092b4ee7a5d73148d4ee7273897e70a1fd7f51920f26f1c2f010f94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714348823628735354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwl6p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6a06956-e991-47ab-986f-34d9467a7dec,},Annotations:map[string]string{io.kuberne
tes.container.hash: e17bc7e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e,PodSandboxId:86b45c3768b5ce0dc71cc92ca53e5f9e28841be8bee91593f00e33ba9337c436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661894139184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rslhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73501ce-7591-45a5-b59e-331f7752c71b,},Annotations:map[string]string{io.kubernetes.container.hash: f16d87b2,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a,PodSandboxId:0a16b0222b33491cc9c639831cc8810dd6cdd337e8669fda79d7ae69e2d92488,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714348661892853715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-xkdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60272694-edd8-4a8c-abd9-707cdb1355ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6973ef85,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a,PodSandboxId:fe59c57afd7dc0af2da87067f453be22228034ce68bbded8860c92e32aa9dc9f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714348659691787871,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pwbfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5303f947-6c3f-47b5-b396-33b92049d48f,},Annotations:map[string]string{io.kubernetes.container.hash: 8078f455,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6,PodSandboxId:9792afe7047dae43a78dc12b63c4a105d977d026e12916156d28c16389393f73,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1714348640048415540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ada5cad8658d509e973050889a81f40,},Annotations:map[string]string{io.kubernetes.container.hash: 2b479516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1,PodSandboxId:fb9c09a8e560907445f7713f1c19f9d784bfebb81837b831f5377dad0605a444,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedA
t:1714348639992830886,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-274394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2454bba76a07a5ac0349d2285d97e46,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=006db205-fe1b-4d66-a3a8-1190bb915957 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb55e9c6d522d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   4606d15628a33       storage-provisioner
	b6a7d4dbe869c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   ae51d2fad4a66       kindnet-p6qmw
	d4b7729fd4b49       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   8b3ab43b76165       kube-apiserver-ha-274394
	35d9114d32187       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   72a5aebc54111       kube-controller-manager-ha-274394
	3c0243bf3189c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   c39536af7def5       busybox-fc5497c4f-wwl6p
	90230580bb896       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   f72b9f6f6c657       kube-vip-ha-274394
	0503917a13777       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c29249a598bab       coredns-7db6d8ff4d-rslhx
	8b48a4004872d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   a9733b733641b       kube-proxy-pwbfs
	8c0fee281fb30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   fbde7716d2c88       coredns-7db6d8ff4d-xkdcv
	b7fcfc456098f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   8b3ab43b76165       kube-apiserver-ha-274394
	b573af7fe461e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   e244f4be4872c       etcd-ha-274394
	a413dc9a5467e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   72a5aebc54111       kube-controller-manager-ha-274394
	5697620f655f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   ae92a5ea253f1       kube-scheduler-ha-274394
	95153ebb81f24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   4606d15628a33       storage-provisioner
	75b0b6d5d9883       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   ae51d2fad4a66       kindnet-p6qmw
	6191db59237ab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   7dc34422a092b       busybox-fc5497c4f-wwl6p
	39cef99138b5e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   86b45c3768b5c       coredns-7db6d8ff4d-rslhx
	4b75dd2cf8167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   0a16b0222b334       coredns-7db6d8ff4d-xkdcv
	10c90fba42aa7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago      Exited              kube-proxy                0                   fe59c57afd7dc       kube-proxy-pwbfs
	a2665b4434106       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   9792afe7047da       etcd-ha-274394
	cd7d63b0cf58d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   fb9c09a8e5609       kube-scheduler-ha-274394
	
	
	==> coredns [0503917a1377777577015db4d0f48982e9b923c054e7727b257b6a6393c065f9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1206273367]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 00:08:40.785) (total time: 10001ms):
	Trace[1206273367]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:50.787)
	Trace[1206273367]: [10.001745644s] [10.001745644s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [39cef99138b5e744de9aa64d56030b084067eb499318ab415c5d05eb896d5a5e] <==
	[INFO] 10.244.1.2:60722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098874s
	[INFO] 10.244.0.4:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092957s
	[INFO] 10.244.0.4:57804 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001823584s
	[INFO] 10.244.0.4:33350 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106647s
	[INFO] 10.244.0.4:39835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220436s
	[INFO] 10.244.0.4:34474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060725s
	[INFO] 10.244.0.4:42677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076278s
	[INFO] 10.244.2.2:41566 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146322s
	[INFO] 10.244.2.2:39633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160447s
	[INFO] 10.244.2.2:36533 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123881s
	[INFO] 10.244.1.2:54710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162932s
	[INFO] 10.244.1.2:59010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096219s
	[INFO] 10.244.1.2:39468 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158565s
	[INFO] 10.244.0.4:45378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179168s
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091044s
	[INFO] 10.244.2.2:46078 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195018s
	[INFO] 10.244.2.2:47504 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268349s
	[INFO] 10.244.1.2:34168 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161101s
	[INFO] 10.244.0.4:52891 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148878s
	[INFO] 10.244.0.4:43079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155917s
	[INFO] 10.244.0.4:46898 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114218s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1867&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [4b75dd2cf81672ae9e90ddaf07985754f57d211ce36d7def309b116cb939e12a] <==
	[INFO] 10.244.0.4:54740 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000078827s
	[INFO] 10.244.0.4:52614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00194917s
	[INFO] 10.244.2.2:33162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:57592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.023066556s
	[INFO] 10.244.2.2:57043 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000235049s
	[INFO] 10.244.1.2:47075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014599s
	[INFO] 10.244.1.2:60870 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002072779s
	[INFO] 10.244.1.2:46861 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094825s
	[INFO] 10.244.1.2:46908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186676s
	[INFO] 10.244.0.4:60188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001709235s
	[INFO] 10.244.0.4:43834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109382s
	[INFO] 10.244.2.2:42186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000296079s
	[INFO] 10.244.1.2:44715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184251s
	[INFO] 10.244.0.4:45543 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116414s
	[INFO] 10.244.0.4:47556 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083226s
	[INFO] 10.244.2.2:59579 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198403s
	[INFO] 10.244.2.2:42196 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278968s
	[INFO] 10.244.1.2:34121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222019s
	[INFO] 10.244.1.2:54334 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016838s
	[INFO] 10.244.1.2:37434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099473s
	[INFO] 10.244.0.4:58711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000413259s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=9m2s&timeoutSeconds=542&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [8c0fee281fb306c003ec9a71f9157ad69f8109a929efb701c3cd0ef9ee13c8ed] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[999540786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 00:08:45.833) (total time: 10924ms):
	Trace[999540786]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer 10924ms (00:08:56.758)
	Trace[999540786]: [10.924375289s] [10.924375289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
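	
	A note on the reflector failures above: CoreDNS's kubernetes plugin is list/watching Services, Namespaces and EndpointSlices through the in-cluster Service VIP (10.96.0.1:443), which was intermittently unreachable. The following is a minimal, hypothetical client-go sketch (not part of the test suite) that issues the same kind of List call from inside a pod, useful for confirming when the VIP becomes reachable again:
	
	package main
	
	import (
	    "context"
	    "fmt"
	    "log"
	
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/rest"
	)
	
	func main() {
	    // InClusterConfig points at the same kubernetes.default Service VIP
	    // (10.96.0.1:443 here) that the CoreDNS reflector is dialing.
	    cfg, err := rest.InClusterConfig()
	    if err != nil {
	        log.Fatalf("in-cluster config: %v", err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        log.Fatalf("building clientset: %v", err)
	    }
	    // Same request shape as the failing list: /api/v1/services?limit=500
	    svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
	    if err != nil {
	        log.Fatalf("listing services: %v", err)
	    }
	    fmt.Printf("apiserver reachable, %d services listed\n", len(svcs.Items))
	}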
	
	
	==> describe nodes <==
	Name:               ha-274394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_28T23_57_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:57:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:13:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:09:17 +0000   Sun, 28 Apr 2024 23:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    ha-274394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbc86a402e5548caa48d259a39be78de
	  System UUID:                bbc86a40-2e55-48ca-a48d-259a39be78de
	  Boot ID:                    b8dfffb5-63e7-4c7e-8e52-3cf4873fed01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwl6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-rslhx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-xkdcv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-274394                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-p6qmw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-274394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-274394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pwbfs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-274394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-274394                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x7 over 16m)      kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x6 over 16m)      kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x6 over 16m)      kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-274394 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-274394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-274394 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-274394 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Warning  ContainerGCFailed        5m18s (x2 over 6m18s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-274394 event: Registered Node ha-274394 in Controller
	
	
	Name:               ha-274394-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_28T23_58_39_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 28 Apr 2024 23:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:10:01 +0000   Mon, 29 Apr 2024 00:09:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-274394-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b55609ff590f4bdba17fff0e954879c9
	  System UUID:                b55609ff-590f-4bdb-a17f-ff0e954879c9
	  Boot ID:                    54f50319-7460-41a7-a5f8-ad51d6817779
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tmk6v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-274394-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6qf7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-274394-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-274394-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-g95c9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-274394-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-274394-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-274394-m02 status is now: NodeNotReady
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-274394-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-274394-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-274394-m02 event: Registered Node ha-274394-m02 in Controller
	
	
	Name:               ha-274394-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-274394-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=ha-274394
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_00_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:00:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-274394-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:11:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:11:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:11:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:11:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 00:10:55 +0000   Mon, 29 Apr 2024 00:11:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-274394-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eda4c6845a404536baab34c56e482672
	  System UUID:                eda4c684-5a40-4536-baab-34c56e482672
	  Boot ID:                    bbb756fc-2b7d-430c-ac26-49a753bf4a63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnlw7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-r7wp2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-4h24n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-274394-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   NodeNotReady             3m40s                  node-controller  Node ha-274394-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-274394-m04 event: Registered Node ha-274394-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-274394-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-274394-m04 has been rebooted, boot id: bbb756fc-2b7d-430c-ac26-49a753bf4a63
	  Normal   NodeReady                2m49s                  kubelet          Node ha-274394-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-274394-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072067] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.188727] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118445] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.277590] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +5.051195] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.782579] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.939635] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.597447] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.110049] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.496389] kauditd_printk_skb: 21 callbacks suppressed
	[Apr28 23:58] kauditd_printk_skb: 74 callbacks suppressed
	[Apr29 00:05] kauditd_printk_skb: 1 callbacks suppressed
	[Apr29 00:08] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +0.155395] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +0.189653] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[  +0.159042] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[  +0.310162] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[  +1.285716] systemd-fstab-generator[4012]: Ignoring "noauto" option for root device
	[  +5.935878] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.390575] kauditd_printk_skb: 87 callbacks suppressed
	[ +12.102672] kauditd_printk_skb: 2 callbacks suppressed
	[Apr29 00:09] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.302728] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [a2665b4434106a2d34b98c1b4039e0e7f884ea1c8cf13bd5616857e99a0237a6] <==
	{"level":"warn","ts":"2024-04-29T00:06:53.439485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:06:44.82249Z","time spent":"8.616986499s","remote":"127.0.0.1:36576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	2024/04/29 00:06:53 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-29T00:06:53.429449Z","caller":"traceutil/trace.go:171","msg":"trace[2112290400] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"1.148304313s","start":"2024-04-29T00:06:52.281137Z","end":"2024-04-29T00:06:53.429442Z","steps":["trace[2112290400] 'agreement among raft nodes before linearized reading'  (duration: 1.128974947s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:06:53.439558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T00:06:52.281044Z","time spent":"1.158507029s","remote":"127.0.0.1:36494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/04/29 00:06:53 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T00:06:53.494577Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:06:53.494643Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:06:53.496045Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3f0f97df8a50e0be","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T00:06:53.496263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496339Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496406Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496601Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496682Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496726Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"18aaab02e1f36e7"}
	{"level":"info","ts":"2024-04-29T00:06:53.496732Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.49674Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.49679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.496882Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.497166Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:06:53.500972Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2024-04-29T00:06:53.501111Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2024-04-29T00:06:53.501146Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-274394","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	
	
	==> etcd [b573af7fe461ed3d8be8b298f3a913f7feda077f922877ea319297042d060e06] <==
	{"level":"warn","ts":"2024-04-29T00:10:29.267339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.758166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-274394-m03\" ","response":"range_response_count:1 size:7025"}
	{"level":"info","ts":"2024-04-29T00:10:29.267421Z","caller":"traceutil/trace.go:171","msg":"trace[1111362850] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-274394-m03; range_end:; response_count:1; response_revision:2446; }","duration":"116.866213ms","start":"2024-04-29T00:10:29.150534Z","end":"2024-04-29T00:10:29.2674Z","steps":["trace[1111362850] 'agreement among raft nodes before linearized reading'  (duration: 87.593329ms)","trace[1111362850] 'range keys from in-memory index tree'  (duration: 29.146552ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:10:29.267725Z","caller":"traceutil/trace.go:171","msg":"trace[1020126397] transaction","detail":"{read_only:false; response_revision:2447; number_of_response:1; }","duration":"117.365662ms","start":"2024-04-29T00:10:29.150349Z","end":"2024-04-29T00:10:29.267715Z","steps":["trace[1020126397] 'process raft request'  (duration: 87.22677ms)","trace[1020126397] 'compare'  (duration: 29.790081ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:10:59.044526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.860865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-4h24n\" ","response":"range_response_count:1 size:4997"}
	{"level":"info","ts":"2024-04-29T00:10:59.044746Z","caller":"traceutil/trace.go:171","msg":"trace[851947625] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-4h24n; range_end:; response_count:1; response_revision:2552; }","duration":"201.124342ms","start":"2024-04-29T00:10:58.843605Z","end":"2024-04-29T00:10:59.044729Z","steps":["trace[851947625] 'range keys from in-memory index tree'  (duration: 199.880435ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:11:10.085869Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.250:33128","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-29T00:11:10.114193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be switched to configuration voters=(111088814488762087 4544017535394177214)"}
	{"level":"info","ts":"2024-04-29T00:11:10.118007Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"db2c13b3d7f66f6a","local-member-id":"3f0f97df8a50e0be","removed-remote-peer-id":"76ea7d5cdc93362b","removed-remote-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-04-29T00:11:10.118286Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.118568Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:11:10.118854Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.118762Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"3f0f97df8a50e0be","removed-member-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.119123Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-04-29T00:11:10.119519Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:11:10.119583Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:11:10.119638Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.119866Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b","error":"context canceled"}
	{"level":"warn","ts":"2024-04-29T00:11:10.120102Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"76ea7d5cdc93362b","error":"failed to read 76ea7d5cdc93362b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-29T00:11:10.120141Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.120435Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b","error":"context canceled"}
	{"level":"info","ts":"2024-04-29T00:11:10.120547Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3f0f97df8a50e0be","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:11:10.120667Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"info","ts":"2024-04-29T00:11:10.120687Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3f0f97df8a50e0be","removed-remote-peer-id":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.137293Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3f0f97df8a50e0be","remote-peer-id-stream-handler":"3f0f97df8a50e0be","remote-peer-id-from":"76ea7d5cdc93362b"}
	{"level":"warn","ts":"2024-04-29T00:11:10.139026Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3f0f97df8a50e0be","remote-peer-id-stream-handler":"3f0f97df8a50e0be","remote-peer-id-from":"76ea7d5cdc93362b"}
	
	
	==> kernel <==
	 00:13:45 up 16 min,  0 users,  load average: 0.34, 0.56, 0.39
	Linux ha-274394 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [75b0b6d5d9883868f1997a69054d1494e4789344348fd2ac60913a0b118de24e] <==
	I0429 00:08:27.731354       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 00:08:28.124214       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 00:08:28.124504       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 00:08:32.181370       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 00:08:35.254210       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 00:08:38.326339       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
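	
	The panic above is kindnet giving up after a bounded number of node-list retries once the Service VIP became unreachable. A rough, hypothetical sketch of that retry-then-panic loop follows (maxRetries and the sleep interval are illustrative values, not kindnet's actual constants):
	
	package main
	
	import (
	    "context"
	    "fmt"
	    "log"
	    "time"
	
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/rest"
	)
	
	const maxRetries = 5 // illustrative only
	
	func main() {
	    cfg, err := rest.InClusterConfig()
	    if err != nil {
	        log.Fatalf("in-cluster config: %v", err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        log.Fatalf("building clientset: %v", err)
	    }
	    for attempt := 1; ; attempt++ {
	        // Same call the failing log line shows: GET /api/v1/nodes via the Service VIP.
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err == nil {
	            fmt.Printf("obtained %d nodes\n", len(nodes.Items))
	            return
	        }
	        log.Printf("Failed to get nodes, retrying after error: %v", err)
	        if attempt >= maxRetries {
	            panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
	        }
	        time.Sleep(3 * time.Second)
	    }
	}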
	
	
	==> kindnet [b6a7d4dbe869ca1caad7d20343cff3f78d02cdcb4175e5d816d03039baa9c0fa] <==
	I0429 00:12:58.748525       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:13:08.764311       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:13:08.764406       1 main.go:227] handling current node
	I0429 00:13:08.764431       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:13:08.764448       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:13:08.764564       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:13:08.764584       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:13:18.780700       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:13:18.780832       1 main.go:227] handling current node
	I0429 00:13:18.780975       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:13:18.781046       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:13:18.781359       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:13:18.781435       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:13:28.792331       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:13:28.792444       1 main.go:227] handling current node
	I0429 00:13:28.792479       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:13:28.792506       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:13:28.792663       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:13:28.792698       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	I0429 00:13:38.811251       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0429 00:13:38.811496       1 main.go:227] handling current node
	I0429 00:13:38.811548       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I0429 00:13:38.811571       1 main.go:250] Node ha-274394-m02 has CIDR [10.244.1.0/24] 
	I0429 00:13:38.811696       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0429 00:13:38.811716       1 main.go:250] Node ha-274394-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b7fcfc456098f3763f49107505a52c0b80da11b3e9ee44354ed1edd20c7d5aed] <==
	I0429 00:08:34.362189       1 options.go:221] external host was not specified, using 192.168.39.237
	I0429 00:08:34.367138       1 server.go:148] Version: v1.30.0
	I0429 00:08:34.367281       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:35.082154       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 00:08:35.082233       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 00:08:35.082406       1 instance.go:299] Using reconciler: lease
	I0429 00:08:35.082858       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 00:08:35.083083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0429 00:08:55.080166       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 00:08:55.080180       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0429 00:08:55.083604       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d4b7729fd4b49c715b4212dd3334d99f7f4415b91a6e0ad04921eae5d66e2b84] <==
	I0429 00:09:16.079761       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:09:16.079776       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 00:09:16.114029       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:09:16.116185       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:09:16.116223       1 policy_source.go:224] refreshing policies
	I0429 00:09:16.152859       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:09:16.152956       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:09:16.153079       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:09:16.153172       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:09:16.153889       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:09:16.154045       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:09:16.154095       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:09:16.154109       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:09:16.154139       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:09:16.154145       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:09:16.154971       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:09:16.160455       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0429 00:09:16.172743       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250 192.168.39.43]
	I0429 00:09:16.174566       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 00:09:16.185602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 00:09:16.190181       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 00:09:16.202716       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:09:17.063360       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 00:09:17.513621       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.237 192.168.39.250 192.168.39.43]
	W0429 00:09:27.514104       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.237 192.168.39.43]
	
	
	==> kube-controller-manager [35d9114d32187eec17e5566f35807ed9bd3cc982b8cfe0c389bf72af6ef6679e] <==
	E0429 00:11:06.909020       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 00:11:06.956571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.035421ms"
	I0429 00:11:06.956714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.947µs"
	I0429 00:11:06.981734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.619411ms"
	I0429 00:11:06.982181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.79µs"
	I0429 00:11:08.845313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.941µs"
	I0429 00:11:08.928501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.726µs"
	I0429 00:11:08.954494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.146µs"
	I0429 00:11:08.964330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.697µs"
	I0429 00:11:09.923532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.405968ms"
	I0429 00:11:09.923679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.441µs"
	I0429 00:11:21.802532       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-274394-m04"
	E0429 00:11:21.840990       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-274394-m03", UID:"d163584d-cf07-4161-bfd8-83dca189e54e", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-274394-m03", UID:"dbd95792-1d6b-4199-83b4-c4a2f302dde4", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-274394-m03" not found
	E0429 00:11:28.810419       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:28.810538       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:28.810571       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:28.810594       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:28.810617       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:48.811595       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:48.811712       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:48.811739       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:48.811764       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	E0429 00:11:48.811788       1 gc_controller.go:153] "Failed to get node" err="node \"ha-274394-m03\" not found" logger="pod-garbage-collector-controller" node="ha-274394-m03"
	I0429 00:11:58.903517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.185358ms"
	I0429 00:11:58.903774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.166µs"
	
	
	==> kube-controller-manager [a413dc9a5467e299b2594817dbaa37417dcd420f092104ce5e713101001ee224] <==
	I0429 00:08:35.148046       1 serving.go:380] Generated self-signed cert in-memory
	I0429 00:08:35.809972       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 00:08:35.810030       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:08:35.811993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:08:35.812139       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:08:35.812725       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:08:35.812805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0429 00:08:56.091659       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.237:8443/healthz\": dial tcp 192.168.39.237:8443: connect: connection refused"
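	
	The error above is the controller-manager timing out while waiting for the apiserver's /healthz endpoint before it will build its controller context. A small, hypothetical probe of the same endpoint (TLS verification is skipped only because this is a throwaway diagnostic, not how the component itself connects):
	
	package main
	
	import (
	    "crypto/tls"
	    "fmt"
	    "net/http"
	    "time"
	)
	
	func main() {
	    client := &http.Client{
	        Timeout:   5 * time.Second,
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    // The same URL the controller-manager reports as unreachable.
	    resp, err := client.Get("https://192.168.39.237:8443/healthz")
	    if err != nil {
	        fmt.Println("apiserver not healthy:", err)
	        return
	    }
	    defer resp.Body.Close()
	    fmt.Println("healthz status:", resp.Status)
	}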
	
	
	==> kube-proxy [10c90fba42aa799b9c352fe0fc65dba46f9338c9cf37408b442e6ed460a38f2a] <==
	E0429 00:05:36.758010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.829412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.829670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:39.829892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:39.830076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:45.975383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:45.975545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:05:58.261311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:05:58.261384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:01.334057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:01.334517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:01.335100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:01.335164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:16.695150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:16.695472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-274394&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:19.766738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:19.766972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 00:06:19.766768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 00:06:19.767245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [8b48a4004872d042c17a9da9d3e7497ebe9189415f3b97d651548e9f13d34c93] <==
	I0429 00:08:35.583826       1 server_linux.go:69] "Using iptables proxy"
	E0429 00:08:38.006583       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:41.078493       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:44.149707       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:50.294133       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:08:59.511226       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 00:09:17.943528       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-274394\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 00:09:17.943708       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0429 00:09:18.104166       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:09:18.105346       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:09:18.105472       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:09:18.128421       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:09:18.128649       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:09:18.128691       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:09:18.130853       1 config.go:192] "Starting service config controller"
	I0429 00:09:18.130973       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:09:18.131015       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:09:18.131019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:09:18.132528       1 config.go:319] "Starting node config controller"
	I0429 00:09:18.132563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:09:18.231053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:09:18.231395       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:09:18.234614       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5697620f655f6994310596c760aac93c16f112f25bd6c63bba0f603ccfe2983a] <==
	W0429 00:09:11.145300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.145386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.442399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.442524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.652470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.652691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:11.995417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.237:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:11.995505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.237:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.209458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.209574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.277354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.277435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.569381       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.569501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.849032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.849148       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:12.945562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.237:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:12.945712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.237:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:13.138204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:13.138291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:13.549813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E0429 00:09:13.549991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W0429 00:09:16.092545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:09:16.092981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0429 00:09:16.199563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cd7d63b0cf58d14e6790389b3cd5cf1a8008f4a196309ede930b89edcd473ca1] <==
	W0429 00:06:48.633181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:48.633287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:48.755489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:06:48.755552       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:06:48.792823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:06:48.793014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:06:48.859716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:06:48.859812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:06:49.347686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:06:49.347719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:06:51.341388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:51.341552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:51.612108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:06:51.612224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 00:06:51.681854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:06:51.682041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:06:52.676215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:52.676254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:52.733717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:06:52.733783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:06:52.936896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 00:06:52.937008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 00:06:53.173884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:06:53.174089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:06:53.386241       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:09:57 ha-274394 kubelet[1379]: E0429 00:09:57.176295    1379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b291d6ca-3a9b-4dd0-b0e9-a183347e7d26)\"" pod="kube-system/storage-provisioner" podUID="b291d6ca-3a9b-4dd0-b0e9-a183347e7d26"
	Apr 29 00:10:08 ha-274394 kubelet[1379]: I0429 00:10:08.177474    1379 scope.go:117] "RemoveContainer" containerID="95153ebb81f243f46bb9d0f3ca059901d3a2c0238754767b78fb8737eacf272f"
	Apr 29 00:10:18 ha-274394 kubelet[1379]: I0429 00:10:18.176493    1379 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-274394" podUID="ce6151de-754a-4f15-94d4-71f4fb9cbd21"
	Apr 29 00:10:18 ha-274394 kubelet[1379]: I0429 00:10:18.211458    1379 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-274394"
	Apr 29 00:10:26 ha-274394 kubelet[1379]: I0429 00:10:26.199902    1379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-274394" podStartSLOduration=8.199859389 podStartE2EDuration="8.199859389s" podCreationTimestamp="2024-04-29 00:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 00:10:26.199497465 +0000 UTC m=+780.184309288" watchObservedRunningTime="2024-04-29 00:10:26.199859389 +0000 UTC m=+780.184671213"
	Apr 29 00:10:26 ha-274394 kubelet[1379]: E0429 00:10:26.208453    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:10:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:10:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:11:26 ha-274394 kubelet[1379]: E0429 00:11:26.205146    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:11:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:11:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:11:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:11:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:12:26 ha-274394 kubelet[1379]: E0429 00:12:26.205870    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:12:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:12:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:12:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:12:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:13:26 ha-274394 kubelet[1379]: E0429 00:13:26.205424    1379 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:13:26 ha-274394 kubelet[1379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:13:26 ha-274394 kubelet[1379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:13:26 ha-274394 kubelet[1379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:13:26 ha-274394 kubelet[1379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:13:43.794951   44975 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-274394 -n ha-274394
helpers_test.go:261: (dbg) Run:  kubectl --context ha-274394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.10s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (308.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061470
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-061470
E0429 00:30:48.629043   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-061470: exit status 82 (2m2.695708538s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-061470-m03"  ...
	* Stopping node "multinode-061470-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-061470" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061470 --wait=true -v=8 --alsologtostderr
E0429 00:33:51.675696   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061470 --wait=true -v=8 --alsologtostderr: (3m2.889051232s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061470
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-061470 -n multinode-061470
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-061470 logs -n 25: (1.660014889s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470:/home/docker/cp-test_multinode-061470-m02_multinode-061470.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470 sudo cat                                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m02_multinode-061470.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03:/home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470-m03 sudo cat                                   | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp testdata/cp-test.txt                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470:/home/docker/cp-test_multinode-061470-m03_multinode-061470.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470 sudo cat                                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m03_multinode-061470.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02:/home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470-m02 sudo cat                                   | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-061470 node stop m03                                                          | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	| node    | multinode-061470 node start                                                             | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-061470                                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC |                     |
	| stop    | -p multinode-061470                                                                     | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC |                     |
	| start   | -p multinode-061470                                                                     | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:31 UTC | 29 Apr 24 00:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-061470                                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:31:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:31:01.590828   54766 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:31:01.590927   54766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:31:01.590936   54766 out.go:304] Setting ErrFile to fd 2...
	I0429 00:31:01.590940   54766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:31:01.591129   54766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:31:01.591694   54766 out.go:298] Setting JSON to false
	I0429 00:31:01.592598   54766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8006,"bootTime":1714342656,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:31:01.592659   54766 start.go:139] virtualization: kvm guest
	I0429 00:31:01.595226   54766 out.go:177] * [multinode-061470] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:31:01.596640   54766 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:31:01.596638   54766 notify.go:220] Checking for updates...
	I0429 00:31:01.598072   54766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:31:01.599558   54766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:31:01.600862   54766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:31:01.602210   54766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:31:01.603595   54766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:31:01.605266   54766 config.go:182] Loaded profile config "multinode-061470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:31:01.605363   54766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:31:01.605754   54766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:31:01.605804   54766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:31:01.621133   54766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0429 00:31:01.621604   54766 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:31:01.622227   54766 main.go:141] libmachine: Using API Version  1
	I0429 00:31:01.622260   54766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:31:01.622602   54766 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:31:01.622770   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.660609   54766 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:31:01.661924   54766 start.go:297] selected driver: kvm2
	I0429 00:31:01.661942   54766 start.go:901] validating driver "kvm2" against &{Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:31:01.662120   54766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:31:01.662434   54766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:31:01.662530   54766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:31:01.677573   54766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:31:01.678256   54766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:31:01.678328   54766 cni.go:84] Creating CNI manager for ""
	I0429 00:31:01.678340   54766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 00:31:01.678399   54766 start.go:340] cluster config:
	{Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:31:01.678534   54766 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:31:01.680326   54766 out.go:177] * Starting "multinode-061470" primary control-plane node in "multinode-061470" cluster
	I0429 00:31:01.681663   54766 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:31:01.681702   54766 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:31:01.681709   54766 cache.go:56] Caching tarball of preloaded images
	I0429 00:31:01.681785   54766 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:31:01.681796   54766 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:31:01.681915   54766 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/config.json ...
	I0429 00:31:01.682168   54766 start.go:360] acquireMachinesLock for multinode-061470: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:31:01.682214   54766 start.go:364] duration metric: took 26.247µs to acquireMachinesLock for "multinode-061470"
	I0429 00:31:01.682228   54766 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:31:01.682235   54766 fix.go:54] fixHost starting: 
	I0429 00:31:01.682491   54766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:31:01.682521   54766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:31:01.697486   54766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41903
	I0429 00:31:01.698093   54766 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:31:01.698595   54766 main.go:141] libmachine: Using API Version  1
	I0429 00:31:01.698614   54766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:31:01.698904   54766 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:31:01.699089   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.699229   54766 main.go:141] libmachine: (multinode-061470) Calling .GetState
	I0429 00:31:01.700860   54766 fix.go:112] recreateIfNeeded on multinode-061470: state=Running err=<nil>
	W0429 00:31:01.700883   54766 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:31:01.703899   54766 out.go:177] * Updating the running kvm2 "multinode-061470" VM ...
	I0429 00:31:01.705108   54766 machine.go:94] provisionDockerMachine start ...
	I0429 00:31:01.705127   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.705320   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.707794   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.708271   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.708304   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.708487   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.708635   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.708788   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.708938   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.709064   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.709250   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.709265   54766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:31:01.827933   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061470
	
	I0429 00:31:01.827965   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:01.828240   54766 buildroot.go:166] provisioning hostname "multinode-061470"
	I0429 00:31:01.828275   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:01.828475   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.831103   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.831527   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.831554   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.831678   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.831880   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.832035   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.832177   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.832358   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.832506   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.832517   54766 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-061470 && echo "multinode-061470" | sudo tee /etc/hostname
	I0429 00:31:01.961375   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061470
	
	I0429 00:31:01.961404   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.964020   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.964344   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.964381   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.964513   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.964744   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.964922   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.965088   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.965263   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.965476   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.965500   54766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-061470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-061470/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-061470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:31:02.075165   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:31:02.075190   54766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:31:02.075216   54766 buildroot.go:174] setting up certificates
	I0429 00:31:02.075226   54766 provision.go:84] configureAuth start
	I0429 00:31:02.075238   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:02.075506   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:31:02.078155   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.078539   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.078563   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.078696   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.080959   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.081287   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.081322   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.081405   54766 provision.go:143] copyHostCerts
	I0429 00:31:02.081433   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:31:02.081464   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:31:02.081473   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:31:02.081553   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:31:02.081651   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:31:02.081679   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:31:02.081689   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:31:02.081733   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:31:02.081789   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:31:02.081812   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:31:02.081821   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:31:02.081850   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:31:02.081910   54766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.multinode-061470 san=[127.0.0.1 192.168.39.59 localhost minikube multinode-061470]
	I0429 00:31:02.258265   54766 provision.go:177] copyRemoteCerts
	I0429 00:31:02.258319   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:31:02.258341   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.260787   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.261136   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.261159   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.261349   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:02.261533   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.261688   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:02.261823   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:31:02.351320   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 00:31:02.351408   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:31:02.379432   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 00:31:02.379507   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 00:31:02.420034   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 00:31:02.420109   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 00:31:02.449866   54766 provision.go:87] duration metric: took 374.62986ms to configureAuth
	I0429 00:31:02.449891   54766 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:31:02.450122   54766 config.go:182] Loaded profile config "multinode-061470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:31:02.450199   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.452768   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.453100   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.453127   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.453334   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:02.453536   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.453693   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.453839   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:02.453997   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:02.454171   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:02.454186   54766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:32:33.174713   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:32:33.174748   54766 machine.go:97] duration metric: took 1m31.469628491s to provisionDockerMachine
	I0429 00:32:33.174762   54766 start.go:293] postStartSetup for "multinode-061470" (driver="kvm2")
	I0429 00:32:33.174779   54766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:32:33.174801   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.175145   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:32:33.175172   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.178406   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.178829   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.178857   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.178996   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.179168   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.179338   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.179493   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.266766   54766 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:32:33.271919   54766 command_runner.go:130] > NAME=Buildroot
	I0429 00:32:33.271943   54766 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 00:32:33.271949   54766 command_runner.go:130] > ID=buildroot
	I0429 00:32:33.271956   54766 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 00:32:33.271961   54766 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 00:32:33.272010   54766 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:32:33.272025   54766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:32:33.272081   54766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:32:33.272147   54766 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:32:33.272156   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0429 00:32:33.272242   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:32:33.282854   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:32:33.312519   54766 start.go:296] duration metric: took 137.742592ms for postStartSetup
	I0429 00:32:33.312555   54766 fix.go:56] duration metric: took 1m31.630319518s for fixHost
	I0429 00:32:33.312574   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.315078   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.315424   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.315455   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.315622   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.315815   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.315973   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.316117   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.316336   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:32:33.316488   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:32:33.316498   54766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:32:33.427591   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714350753.398655167
	
	I0429 00:32:33.427613   54766 fix.go:216] guest clock: 1714350753.398655167
	I0429 00:32:33.427632   54766 fix.go:229] Guest: 2024-04-29 00:32:33.398655167 +0000 UTC Remote: 2024-04-29 00:32:33.312559236 +0000 UTC m=+91.769784437 (delta=86.095931ms)
	I0429 00:32:33.427650   54766 fix.go:200] guest clock delta is within tolerance: 86.095931ms
	I0429 00:32:33.427656   54766 start.go:83] releasing machines lock for "multinode-061470", held for 1m31.745433671s
	I0429 00:32:33.427674   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.427920   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:32:33.430595   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.430941   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.430963   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.431149   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431616   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431781   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431869   54766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:32:33.431900   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.432009   54766 ssh_runner.go:195] Run: cat /version.json
	I0429 00:32:33.432034   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.434424   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434708   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.434739   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434759   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434868   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.435048   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.435203   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.435272   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.435299   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.435346   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.435482   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.435634   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.435788   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.435936   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.536885   54766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 00:32:33.536945   54766 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 00:32:33.537082   54766 ssh_runner.go:195] Run: systemctl --version
	I0429 00:32:33.543779   54766 command_runner.go:130] > systemd 252 (252)
	I0429 00:32:33.543826   54766 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 00:32:33.543888   54766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:32:33.707703   54766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 00:32:33.727207   54766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 00:32:33.727659   54766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:32:33.727734   54766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:32:33.737965   54766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:32:33.737996   54766 start.go:494] detecting cgroup driver to use...
	I0429 00:32:33.738079   54766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:32:33.756027   54766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:32:33.771637   54766 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:32:33.771696   54766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:32:33.786521   54766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:32:33.800539   54766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:32:33.951688   54766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:32:34.104465   54766 docker.go:233] disabling docker service ...
	I0429 00:32:34.104541   54766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:32:34.122794   54766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:32:34.137524   54766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:32:34.283875   54766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:32:34.431476   54766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:32:34.447054   54766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:32:34.472752   54766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 00:32:34.473139   54766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:32:34.473194   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.484939   54766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:32:34.485017   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.496722   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.508587   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.521346   54766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:32:34.534344   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.546332   54766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.559330   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.571857   54766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:32:34.581768   54766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 00:32:34.581929   54766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:32:34.591744   54766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:32:34.738618   54766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:32:34.998678   54766 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:32:34.998757   54766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:32:35.004613   54766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 00:32:35.004640   54766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 00:32:35.004650   54766 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0429 00:32:35.004661   54766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 00:32:35.004669   54766 command_runner.go:130] > Access: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004678   54766 command_runner.go:130] > Modify: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004686   54766 command_runner.go:130] > Change: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004706   54766 command_runner.go:130] >  Birth: -
	I0429 00:32:35.004725   54766 start.go:562] Will wait 60s for crictl version
	I0429 00:32:35.004767   54766 ssh_runner.go:195] Run: which crictl
	I0429 00:32:35.008932   54766 command_runner.go:130] > /usr/bin/crictl
	I0429 00:32:35.009170   54766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:32:35.048789   54766 command_runner.go:130] > Version:  0.1.0
	I0429 00:32:35.048812   54766 command_runner.go:130] > RuntimeName:  cri-o
	I0429 00:32:35.048817   54766 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 00:32:35.048821   54766 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 00:32:35.049990   54766 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:32:35.050494   54766 ssh_runner.go:195] Run: crio --version
	I0429 00:32:35.088087   54766 command_runner.go:130] > crio version 1.29.1
	I0429 00:32:35.088107   54766 command_runner.go:130] > Version:        1.29.1
	I0429 00:32:35.088113   54766 command_runner.go:130] > GitCommit:      unknown
	I0429 00:32:35.088117   54766 command_runner.go:130] > GitCommitDate:  unknown
	I0429 00:32:35.088121   54766 command_runner.go:130] > GitTreeState:   clean
	I0429 00:32:35.088128   54766 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 00:32:35.088133   54766 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 00:32:35.088136   54766 command_runner.go:130] > Compiler:       gc
	I0429 00:32:35.088141   54766 command_runner.go:130] > Platform:       linux/amd64
	I0429 00:32:35.088145   54766 command_runner.go:130] > Linkmode:       dynamic
	I0429 00:32:35.088149   54766 command_runner.go:130] > BuildTags:      
	I0429 00:32:35.088153   54766 command_runner.go:130] >   containers_image_ostree_stub
	I0429 00:32:35.088158   54766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 00:32:35.088161   54766 command_runner.go:130] >   btrfs_noversion
	I0429 00:32:35.088166   54766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 00:32:35.088170   54766 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 00:32:35.088174   54766 command_runner.go:130] >   seccomp
	I0429 00:32:35.088179   54766 command_runner.go:130] > LDFlags:          unknown
	I0429 00:32:35.088183   54766 command_runner.go:130] > SeccompEnabled:   true
	I0429 00:32:35.088187   54766 command_runner.go:130] > AppArmorEnabled:  false
	I0429 00:32:35.089653   54766 ssh_runner.go:195] Run: crio --version
	I0429 00:32:35.129757   54766 command_runner.go:130] > crio version 1.29.1
	I0429 00:32:35.129780   54766 command_runner.go:130] > Version:        1.29.1
	I0429 00:32:35.129790   54766 command_runner.go:130] > GitCommit:      unknown
	I0429 00:32:35.129828   54766 command_runner.go:130] > GitCommitDate:  unknown
	I0429 00:32:35.129843   54766 command_runner.go:130] > GitTreeState:   clean
	I0429 00:32:35.129854   54766 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 00:32:35.129861   54766 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 00:32:35.129865   54766 command_runner.go:130] > Compiler:       gc
	I0429 00:32:35.129872   54766 command_runner.go:130] > Platform:       linux/amd64
	I0429 00:32:35.129876   54766 command_runner.go:130] > Linkmode:       dynamic
	I0429 00:32:35.129883   54766 command_runner.go:130] > BuildTags:      
	I0429 00:32:35.129887   54766 command_runner.go:130] >   containers_image_ostree_stub
	I0429 00:32:35.129892   54766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 00:32:35.129895   54766 command_runner.go:130] >   btrfs_noversion
	I0429 00:32:35.129903   54766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 00:32:35.129909   54766 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 00:32:35.129916   54766 command_runner.go:130] >   seccomp
	I0429 00:32:35.129923   54766 command_runner.go:130] > LDFlags:          unknown
	I0429 00:32:35.129931   54766 command_runner.go:130] > SeccompEnabled:   true
	I0429 00:32:35.129939   54766 command_runner.go:130] > AppArmorEnabled:  false
	I0429 00:32:35.132181   54766 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:32:35.133751   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:32:35.136332   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:35.136701   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:35.136736   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:35.136923   54766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:32:35.141820   54766 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 00:32:35.142059   54766 kubeadm.go:877] updating cluster {Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:32:35.142174   54766 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:32:35.142214   54766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:32:35.191177   54766 command_runner.go:130] > {
	I0429 00:32:35.191200   54766 command_runner.go:130] >   "images": [
	I0429 00:32:35.191204   54766 command_runner.go:130] >     {
	I0429 00:32:35.191211   54766 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 00:32:35.191217   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191222   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 00:32:35.191226   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191230   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191240   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 00:32:35.191249   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 00:32:35.191259   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191264   54766 command_runner.go:130] >       "size": "65291810",
	I0429 00:32:35.191268   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191272   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191278   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191282   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191286   54766 command_runner.go:130] >     },
	I0429 00:32:35.191290   54766 command_runner.go:130] >     {
	I0429 00:32:35.191298   54766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 00:32:35.191303   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191311   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 00:32:35.191314   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191321   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191328   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 00:32:35.191337   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 00:32:35.191340   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191345   54766 command_runner.go:130] >       "size": "1363676",
	I0429 00:32:35.191348   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191355   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191362   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191365   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191369   54766 command_runner.go:130] >     },
	I0429 00:32:35.191372   54766 command_runner.go:130] >     {
	I0429 00:32:35.191378   54766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 00:32:35.191383   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191389   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 00:32:35.191395   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191399   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191408   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 00:32:35.191418   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 00:32:35.191421   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191426   54766 command_runner.go:130] >       "size": "31470524",
	I0429 00:32:35.191431   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191443   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191450   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191454   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191460   54766 command_runner.go:130] >     },
	I0429 00:32:35.191463   54766 command_runner.go:130] >     {
	I0429 00:32:35.191469   54766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 00:32:35.191475   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191480   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 00:32:35.191486   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191490   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191497   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 00:32:35.191511   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 00:32:35.191517   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191521   54766 command_runner.go:130] >       "size": "61245718",
	I0429 00:32:35.191525   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191529   54766 command_runner.go:130] >       "username": "nonroot",
	I0429 00:32:35.191536   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191540   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191546   54766 command_runner.go:130] >     },
	I0429 00:32:35.191549   54766 command_runner.go:130] >     {
	I0429 00:32:35.191555   54766 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 00:32:35.191561   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191566   54766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 00:32:35.191572   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191576   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191585   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 00:32:35.191594   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 00:32:35.191600   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191604   54766 command_runner.go:130] >       "size": "150779692",
	I0429 00:32:35.191610   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191614   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191620   54766 command_runner.go:130] >       },
	I0429 00:32:35.191624   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191630   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191634   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191640   54766 command_runner.go:130] >     },
	I0429 00:32:35.191648   54766 command_runner.go:130] >     {
	I0429 00:32:35.191657   54766 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 00:32:35.191663   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191668   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 00:32:35.191674   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191678   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191687   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 00:32:35.191697   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 00:32:35.191703   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191708   54766 command_runner.go:130] >       "size": "117609952",
	I0429 00:32:35.191714   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191724   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191730   54766 command_runner.go:130] >       },
	I0429 00:32:35.191734   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191738   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191744   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191748   54766 command_runner.go:130] >     },
	I0429 00:32:35.191754   54766 command_runner.go:130] >     {
	I0429 00:32:35.191760   54766 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 00:32:35.191766   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191771   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 00:32:35.191777   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191782   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191791   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 00:32:35.191800   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 00:32:35.191806   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191811   54766 command_runner.go:130] >       "size": "112170310",
	I0429 00:32:35.191817   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191821   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191826   54766 command_runner.go:130] >       },
	I0429 00:32:35.191830   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191836   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191840   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191846   54766 command_runner.go:130] >     },
	I0429 00:32:35.191849   54766 command_runner.go:130] >     {
	I0429 00:32:35.191858   54766 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 00:32:35.191867   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191874   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 00:32:35.191878   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191882   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191905   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 00:32:35.191916   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 00:32:35.191919   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191923   54766 command_runner.go:130] >       "size": "85932953",
	I0429 00:32:35.191926   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191929   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191933   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191937   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191940   54766 command_runner.go:130] >     },
	I0429 00:32:35.191943   54766 command_runner.go:130] >     {
	I0429 00:32:35.191949   54766 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 00:32:35.191952   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191957   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 00:32:35.191960   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191964   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191971   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 00:32:35.191978   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 00:32:35.191982   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191988   54766 command_runner.go:130] >       "size": "63026502",
	I0429 00:32:35.191992   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191999   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.192002   54766 command_runner.go:130] >       },
	I0429 00:32:35.192006   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.192010   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.192016   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.192020   54766 command_runner.go:130] >     },
	I0429 00:32:35.192026   54766 command_runner.go:130] >     {
	I0429 00:32:35.192031   54766 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 00:32:35.192038   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.192043   54766 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 00:32:35.192049   54766 command_runner.go:130] >       ],
	I0429 00:32:35.192053   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.192080   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 00:32:35.192093   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 00:32:35.192100   54766 command_runner.go:130] >       ],
	I0429 00:32:35.192107   54766 command_runner.go:130] >       "size": "750414",
	I0429 00:32:35.192111   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.192117   54766 command_runner.go:130] >         "value": "65535"
	I0429 00:32:35.192121   54766 command_runner.go:130] >       },
	I0429 00:32:35.192127   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.192132   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.192138   54766 command_runner.go:130] >       "pinned": true
	I0429 00:32:35.192141   54766 command_runner.go:130] >     }
	I0429 00:32:35.192147   54766 command_runner.go:130] >   ]
	I0429 00:32:35.192150   54766 command_runner.go:130] > }
	I0429 00:32:35.193008   54766 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:32:35.193022   54766 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:32:35.193065   54766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:32:35.228254   54766 command_runner.go:130] > {
	I0429 00:32:35.228283   54766 command_runner.go:130] >   "images": [
	I0429 00:32:35.228289   54766 command_runner.go:130] >     {
	I0429 00:32:35.228301   54766 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 00:32:35.228310   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228318   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 00:32:35.228326   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228332   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228346   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 00:32:35.228362   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 00:32:35.228368   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228378   54766 command_runner.go:130] >       "size": "65291810",
	I0429 00:32:35.228384   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228392   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228414   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228424   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228428   54766 command_runner.go:130] >     },
	I0429 00:32:35.228431   54766 command_runner.go:130] >     {
	I0429 00:32:35.228437   54766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 00:32:35.228444   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228449   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 00:32:35.228452   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228456   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228463   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 00:32:35.228471   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 00:32:35.228475   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228478   54766 command_runner.go:130] >       "size": "1363676",
	I0429 00:32:35.228494   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228501   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228505   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228509   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228512   54766 command_runner.go:130] >     },
	I0429 00:32:35.228516   54766 command_runner.go:130] >     {
	I0429 00:32:35.228525   54766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 00:32:35.228529   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228534   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 00:32:35.228540   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228544   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228554   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 00:32:35.228564   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 00:32:35.228570   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228575   54766 command_runner.go:130] >       "size": "31470524",
	I0429 00:32:35.228579   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228583   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228589   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228593   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228599   54766 command_runner.go:130] >     },
	I0429 00:32:35.228603   54766 command_runner.go:130] >     {
	I0429 00:32:35.228609   54766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 00:32:35.228615   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228620   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 00:32:35.228626   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228630   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228638   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 00:32:35.228652   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 00:32:35.228656   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228661   54766 command_runner.go:130] >       "size": "61245718",
	I0429 00:32:35.228665   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228672   54766 command_runner.go:130] >       "username": "nonroot",
	I0429 00:32:35.228679   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228685   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228688   54766 command_runner.go:130] >     },
	I0429 00:32:35.228692   54766 command_runner.go:130] >     {
	I0429 00:32:35.228699   54766 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 00:32:35.228707   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228714   54766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 00:32:35.228736   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228747   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228758   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 00:32:35.228772   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 00:32:35.228778   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228782   54766 command_runner.go:130] >       "size": "150779692",
	I0429 00:32:35.228786   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228790   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228796   54766 command_runner.go:130] >       },
	I0429 00:32:35.228800   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228804   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228809   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228815   54766 command_runner.go:130] >     },
	I0429 00:32:35.228818   54766 command_runner.go:130] >     {
	I0429 00:32:35.228824   54766 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 00:32:35.228831   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228836   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 00:32:35.228842   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228846   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228855   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 00:32:35.228863   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 00:32:35.228869   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228873   54766 command_runner.go:130] >       "size": "117609952",
	I0429 00:32:35.228876   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228880   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228884   54766 command_runner.go:130] >       },
	I0429 00:32:35.228888   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228894   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228898   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228904   54766 command_runner.go:130] >     },
	I0429 00:32:35.228907   54766 command_runner.go:130] >     {
	I0429 00:32:35.228913   54766 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 00:32:35.228919   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228927   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 00:32:35.228932   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228936   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228944   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 00:32:35.228954   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 00:32:35.228960   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228967   54766 command_runner.go:130] >       "size": "112170310",
	I0429 00:32:35.228971   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228974   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228978   54766 command_runner.go:130] >       },
	I0429 00:32:35.228982   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228985   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228989   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228993   54766 command_runner.go:130] >     },
	I0429 00:32:35.228996   54766 command_runner.go:130] >     {
	I0429 00:32:35.229002   54766 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 00:32:35.229008   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229013   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 00:32:35.229019   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229023   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229039   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 00:32:35.229049   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 00:32:35.229052   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229059   54766 command_runner.go:130] >       "size": "85932953",
	I0429 00:32:35.229063   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.229069   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229072   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229076   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.229080   54766 command_runner.go:130] >     },
	I0429 00:32:35.229083   54766 command_runner.go:130] >     {
	I0429 00:32:35.229089   54766 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 00:32:35.229093   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229098   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 00:32:35.229104   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229108   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229118   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 00:32:35.229126   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 00:32:35.229132   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229136   54766 command_runner.go:130] >       "size": "63026502",
	I0429 00:32:35.229140   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.229143   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.229147   54766 command_runner.go:130] >       },
	I0429 00:32:35.229151   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229157   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229161   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.229165   54766 command_runner.go:130] >     },
	I0429 00:32:35.229168   54766 command_runner.go:130] >     {
	I0429 00:32:35.229177   54766 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 00:32:35.229181   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229186   54766 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 00:32:35.229191   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229195   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229202   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 00:32:35.229213   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 00:32:35.229216   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229220   54766 command_runner.go:130] >       "size": "750414",
	I0429 00:32:35.229229   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.229235   54766 command_runner.go:130] >         "value": "65535"
	I0429 00:32:35.229239   54766 command_runner.go:130] >       },
	I0429 00:32:35.229249   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229256   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229265   54766 command_runner.go:130] >       "pinned": true
	I0429 00:32:35.229270   54766 command_runner.go:130] >     }
	I0429 00:32:35.229278   54766 command_runner.go:130] >   ]
	I0429 00:32:35.229283   54766 command_runner.go:130] > }
	I0429 00:32:35.229810   54766 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:32:35.229827   54766 cache_images.go:84] Images are preloaded, skipping loading
	I0429 00:32:35.229835   54766 kubeadm.go:928] updating node { 192.168.39.59 8443 v1.30.0 crio true true} ...
	I0429 00:32:35.229939   54766 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-061470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:32:35.229997   54766 ssh_runner.go:195] Run: crio config
	I0429 00:32:35.276280   54766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 00:32:35.276308   54766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 00:32:35.276319   54766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 00:32:35.276326   54766 command_runner.go:130] > #
	I0429 00:32:35.276335   54766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 00:32:35.276345   54766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 00:32:35.276358   54766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 00:32:35.276385   54766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 00:32:35.276397   54766 command_runner.go:130] > # reload'.
	I0429 00:32:35.276407   54766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 00:32:35.276418   54766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 00:32:35.276431   54766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 00:32:35.276440   54766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 00:32:35.276452   54766 command_runner.go:130] > [crio]
	I0429 00:32:35.276463   54766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 00:32:35.276474   54766 command_runner.go:130] > # containers images, in this directory.
	I0429 00:32:35.276482   54766 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 00:32:35.276508   54766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 00:32:35.276522   54766 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 00:32:35.276534   54766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 00:32:35.276542   54766 command_runner.go:130] > # imagestore = ""
	I0429 00:32:35.276557   54766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 00:32:35.276571   54766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 00:32:35.276581   54766 command_runner.go:130] > storage_driver = "overlay"
	I0429 00:32:35.276591   54766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 00:32:35.276604   54766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 00:32:35.276614   54766 command_runner.go:130] > storage_option = [
	I0429 00:32:35.276622   54766 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 00:32:35.276630   54766 command_runner.go:130] > ]
	I0429 00:32:35.276642   54766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 00:32:35.276656   54766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 00:32:35.276667   54766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 00:32:35.276680   54766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 00:32:35.276693   54766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 00:32:35.276700   54766 command_runner.go:130] > # always happen on a node reboot
	I0429 00:32:35.276712   54766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 00:32:35.276729   54766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 00:32:35.276743   54766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 00:32:35.276755   54766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 00:32:35.276767   54766 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 00:32:35.276782   54766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 00:32:35.276799   54766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 00:32:35.276808   54766 command_runner.go:130] > # internal_wipe = true
	I0429 00:32:35.276822   54766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 00:32:35.276834   54766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 00:32:35.276845   54766 command_runner.go:130] > # internal_repair = false
	I0429 00:32:35.276857   54766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 00:32:35.276870   54766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 00:32:35.276883   54766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 00:32:35.276899   54766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 00:32:35.276913   54766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 00:32:35.276921   54766 command_runner.go:130] > [crio.api]
	I0429 00:32:35.276933   54766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 00:32:35.276945   54766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 00:32:35.276961   54766 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 00:32:35.276972   54766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 00:32:35.276986   54766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 00:32:35.276995   54766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 00:32:35.277005   54766 command_runner.go:130] > # stream_port = "0"
	I0429 00:32:35.277016   54766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 00:32:35.277027   54766 command_runner.go:130] > # stream_enable_tls = false
	I0429 00:32:35.277040   54766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 00:32:35.277050   54766 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 00:32:35.277064   54766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 00:32:35.277078   54766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 00:32:35.277087   54766 command_runner.go:130] > # minutes.
	I0429 00:32:35.277094   54766 command_runner.go:130] > # stream_tls_cert = ""
	I0429 00:32:35.277105   54766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 00:32:35.277118   54766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 00:32:35.277128   54766 command_runner.go:130] > # stream_tls_key = ""
	I0429 00:32:35.277139   54766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 00:32:35.277153   54766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 00:32:35.277172   54766 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 00:32:35.277183   54766 command_runner.go:130] > # stream_tls_ca = ""
	I0429 00:32:35.277199   54766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 00:32:35.277211   54766 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 00:32:35.277226   54766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 00:32:35.277240   54766 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 00:32:35.277253   54766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 00:32:35.277266   54766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 00:32:35.277275   54766 command_runner.go:130] > [crio.runtime]
	I0429 00:32:35.277286   54766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 00:32:35.277298   54766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 00:32:35.277308   54766 command_runner.go:130] > # "nofile=1024:2048"
	I0429 00:32:35.277319   54766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 00:32:35.277331   54766 command_runner.go:130] > # default_ulimits = [
	I0429 00:32:35.277336   54766 command_runner.go:130] > # ]
	I0429 00:32:35.277347   54766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 00:32:35.277357   54766 command_runner.go:130] > # no_pivot = false
	I0429 00:32:35.277368   54766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 00:32:35.277381   54766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 00:32:35.277393   54766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 00:32:35.277407   54766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 00:32:35.277419   54766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 00:32:35.277433   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 00:32:35.277443   54766 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 00:32:35.277450   54766 command_runner.go:130] > # Cgroup setting for conmon
	I0429 00:32:35.277465   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 00:32:35.277475   54766 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 00:32:35.277487   54766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 00:32:35.277499   54766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 00:32:35.277513   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 00:32:35.277522   54766 command_runner.go:130] > conmon_env = [
	I0429 00:32:35.277535   54766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 00:32:35.277543   54766 command_runner.go:130] > ]
	I0429 00:32:35.277552   54766 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 00:32:35.277564   54766 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 00:32:35.277577   54766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 00:32:35.277587   54766 command_runner.go:130] > # default_env = [
	I0429 00:32:35.277596   54766 command_runner.go:130] > # ]
	I0429 00:32:35.277606   54766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 00:32:35.277622   54766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0429 00:32:35.277631   54766 command_runner.go:130] > # selinux = false
	I0429 00:32:35.277642   54766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 00:32:35.277656   54766 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 00:32:35.277669   54766 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 00:32:35.277679   54766 command_runner.go:130] > # seccomp_profile = ""
	I0429 00:32:35.277690   54766 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 00:32:35.277703   54766 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 00:32:35.277716   54766 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 00:32:35.277726   54766 command_runner.go:130] > # which might increase security.
	I0429 00:32:35.277740   54766 command_runner.go:130] > # This option is currently deprecated,
	I0429 00:32:35.277749   54766 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 00:32:35.277760   54766 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 00:32:35.277775   54766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 00:32:35.277788   54766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 00:32:35.277801   54766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 00:32:35.277813   54766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 00:32:35.277824   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.277837   54766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 00:32:35.277850   54766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 00:32:35.277861   54766 command_runner.go:130] > # the cgroup blockio controller.
	I0429 00:32:35.277871   54766 command_runner.go:130] > # blockio_config_file = ""
	I0429 00:32:35.277884   54766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 00:32:35.277894   54766 command_runner.go:130] > # blockio parameters.
	I0429 00:32:35.277901   54766 command_runner.go:130] > # blockio_reload = false
	I0429 00:32:35.277915   54766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 00:32:35.277925   54766 command_runner.go:130] > # irqbalance daemon.
	I0429 00:32:35.277937   54766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 00:32:35.277950   54766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 00:32:35.277965   54766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 00:32:35.277979   54766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 00:32:35.277991   54766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 00:32:35.278003   54766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 00:32:35.278015   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.278034   54766 command_runner.go:130] > # rdt_config_file = ""
	I0429 00:32:35.278044   54766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 00:32:35.278055   54766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 00:32:35.278079   54766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 00:32:35.278089   54766 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 00:32:35.278102   54766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 00:32:35.278115   54766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 00:32:35.278125   54766 command_runner.go:130] > # will be added.
	I0429 00:32:35.278134   54766 command_runner.go:130] > # default_capabilities = [
	I0429 00:32:35.278142   54766 command_runner.go:130] > # 	"CHOWN",
	I0429 00:32:35.278148   54766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 00:32:35.278158   54766 command_runner.go:130] > # 	"FSETID",
	I0429 00:32:35.278166   54766 command_runner.go:130] > # 	"FOWNER",
	I0429 00:32:35.278175   54766 command_runner.go:130] > # 	"SETGID",
	I0429 00:32:35.278182   54766 command_runner.go:130] > # 	"SETUID",
	I0429 00:32:35.278191   54766 command_runner.go:130] > # 	"SETPCAP",
	I0429 00:32:35.278198   54766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 00:32:35.278207   54766 command_runner.go:130] > # 	"KILL",
	I0429 00:32:35.278214   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278235   54766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 00:32:35.278249   54766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 00:32:35.278259   54766 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 00:32:35.278271   54766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 00:32:35.278284   54766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 00:32:35.278293   54766 command_runner.go:130] > default_sysctls = [
	I0429 00:32:35.278306   54766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 00:32:35.278314   54766 command_runner.go:130] > ]
	I0429 00:32:35.278323   54766 command_runner.go:130] > # List of devices on the host that a
	I0429 00:32:35.278337   54766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 00:32:35.278347   54766 command_runner.go:130] > # allowed_devices = [
	I0429 00:32:35.278354   54766 command_runner.go:130] > # 	"/dev/fuse",
	I0429 00:32:35.278361   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278370   54766 command_runner.go:130] > # List of additional devices. specified as
	I0429 00:32:35.278386   54766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 00:32:35.278397   54766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 00:32:35.278407   54766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 00:32:35.278417   54766 command_runner.go:130] > # additional_devices = [
	I0429 00:32:35.278425   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278437   54766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 00:32:35.278444   54766 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 00:32:35.278451   54766 command_runner.go:130] > # 	"/etc/cdi",
	I0429 00:32:35.278460   54766 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 00:32:35.278466   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278480   54766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 00:32:35.278493   54766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 00:32:35.278502   54766 command_runner.go:130] > # Defaults to false.
	I0429 00:32:35.278513   54766 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 00:32:35.278525   54766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 00:32:35.278539   54766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 00:32:35.278548   54766 command_runner.go:130] > # hooks_dir = [
	I0429 00:32:35.278559   54766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 00:32:35.278568   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278579   54766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 00:32:35.278592   54766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 00:32:35.278605   54766 command_runner.go:130] > # its default mounts from the following two files:
	I0429 00:32:35.278609   54766 command_runner.go:130] > #
	I0429 00:32:35.278619   54766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 00:32:35.278629   54766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 00:32:35.278641   54766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 00:32:35.278647   54766 command_runner.go:130] > #
	I0429 00:32:35.278659   54766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 00:32:35.278673   54766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 00:32:35.278686   54766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 00:32:35.278696   54766 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 00:32:35.278702   54766 command_runner.go:130] > #
	I0429 00:32:35.278710   54766 command_runner.go:130] > # default_mounts_file = ""
	I0429 00:32:35.278723   54766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 00:32:35.278740   54766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 00:32:35.278749   54766 command_runner.go:130] > pids_limit = 1024
	I0429 00:32:35.278757   54766 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0429 00:32:35.278765   54766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 00:32:35.278772   54766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 00:32:35.278781   54766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 00:32:35.278785   54766 command_runner.go:130] > # log_size_max = -1
	I0429 00:32:35.278791   54766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 00:32:35.278796   54766 command_runner.go:130] > # log_to_journald = false
	I0429 00:32:35.278802   54766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 00:32:35.278809   54766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 00:32:35.278814   54766 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 00:32:35.278821   54766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 00:32:35.278826   54766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 00:32:35.278832   54766 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 00:32:35.278837   54766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 00:32:35.278842   54766 command_runner.go:130] > # read_only = false
	I0429 00:32:35.278849   54766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 00:32:35.278862   54766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 00:32:35.278871   54766 command_runner.go:130] > # live configuration reload.
	I0429 00:32:35.278877   54766 command_runner.go:130] > # log_level = "info"
	I0429 00:32:35.278888   54766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 00:32:35.278899   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.278909   54766 command_runner.go:130] > # log_filter = ""
	I0429 00:32:35.278919   54766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 00:32:35.278932   54766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 00:32:35.278942   54766 command_runner.go:130] > # separated by comma.
	I0429 00:32:35.278955   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.278964   54766 command_runner.go:130] > # uid_mappings = ""
	I0429 00:32:35.278973   54766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 00:32:35.278982   54766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 00:32:35.278986   54766 command_runner.go:130] > # separated by comma.
	I0429 00:32:35.278993   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279000   54766 command_runner.go:130] > # gid_mappings = ""
	I0429 00:32:35.279006   54766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 00:32:35.279015   54766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 00:32:35.279026   54766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 00:32:35.279036   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279042   54766 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 00:32:35.279048   54766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 00:32:35.279056   54766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 00:32:35.279064   54766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 00:32:35.279072   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279078   54766 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 00:32:35.279084   54766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 00:32:35.279092   54766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 00:32:35.279100   54766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 00:32:35.279104   54766 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 00:32:35.279111   54766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 00:32:35.279120   54766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 00:32:35.279127   54766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 00:32:35.279132   54766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 00:32:35.279138   54766 command_runner.go:130] > drop_infra_ctr = false
	I0429 00:32:35.279145   54766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 00:32:35.279153   54766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 00:32:35.279162   54766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 00:32:35.279169   54766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 00:32:35.279176   54766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 00:32:35.279184   54766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 00:32:35.279190   54766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 00:32:35.279197   54766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 00:32:35.279202   54766 command_runner.go:130] > # shared_cpuset = ""
	I0429 00:32:35.279209   54766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 00:32:35.279216   54766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 00:32:35.279220   54766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 00:32:35.279232   54766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 00:32:35.279239   54766 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 00:32:35.279244   54766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 00:32:35.279252   54766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 00:32:35.279257   54766 command_runner.go:130] > # enable_criu_support = false
	I0429 00:32:35.279264   54766 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 00:32:35.279272   54766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 00:32:35.279279   54766 command_runner.go:130] > # enable_pod_events = false
	I0429 00:32:35.279285   54766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 00:32:35.279293   54766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 00:32:35.279300   54766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 00:32:35.279304   54766 command_runner.go:130] > # default_runtime = "runc"
	I0429 00:32:35.279309   54766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 00:32:35.279318   54766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0429 00:32:35.279330   54766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 00:32:35.279337   54766 command_runner.go:130] > # creation as a file is not desired either.
	I0429 00:32:35.279345   54766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 00:32:35.279352   54766 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 00:32:35.279357   54766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 00:32:35.279362   54766 command_runner.go:130] > # ]
	I0429 00:32:35.279368   54766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 00:32:35.279377   54766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 00:32:35.279385   54766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 00:32:35.279392   54766 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 00:32:35.279400   54766 command_runner.go:130] > #
	I0429 00:32:35.279406   54766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 00:32:35.279411   54766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 00:32:35.279449   54766 command_runner.go:130] > # runtime_type = "oci"
	I0429 00:32:35.279456   54766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 00:32:35.279461   54766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 00:32:35.279467   54766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 00:32:35.279472   54766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 00:32:35.279478   54766 command_runner.go:130] > # monitor_env = []
	I0429 00:32:35.279483   54766 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 00:32:35.279489   54766 command_runner.go:130] > # allowed_annotations = []
	I0429 00:32:35.279494   54766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 00:32:35.279500   54766 command_runner.go:130] > # Where:
	I0429 00:32:35.279505   54766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 00:32:35.279513   54766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 00:32:35.279521   54766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 00:32:35.279529   54766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 00:32:35.279533   54766 command_runner.go:130] > #   in $PATH.
	I0429 00:32:35.279539   54766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 00:32:35.279546   54766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 00:32:35.279554   54766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 00:32:35.279560   54766 command_runner.go:130] > #   state.
	I0429 00:32:35.279566   54766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 00:32:35.279573   54766 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0429 00:32:35.279582   54766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 00:32:35.279590   54766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 00:32:35.279598   54766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 00:32:35.279606   54766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 00:32:35.279612   54766 command_runner.go:130] > #   The currently recognized values are:
	I0429 00:32:35.279618   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 00:32:35.279626   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 00:32:35.279634   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 00:32:35.279640   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 00:32:35.279650   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 00:32:35.279658   54766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 00:32:35.279666   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 00:32:35.279676   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 00:32:35.279684   54766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 00:32:35.279692   54766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 00:32:35.279698   54766 command_runner.go:130] > #   deprecated option "conmon".
	I0429 00:32:35.279705   54766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 00:32:35.279713   54766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 00:32:35.279721   54766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 00:32:35.279727   54766 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 00:32:35.279736   54766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0429 00:32:35.279744   54766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 00:32:35.279752   54766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 00:32:35.279759   54766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 00:32:35.279762   54766 command_runner.go:130] > #
	I0429 00:32:35.279767   54766 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 00:32:35.279772   54766 command_runner.go:130] > #
	I0429 00:32:35.279778   54766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 00:32:35.279786   54766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 00:32:35.279791   54766 command_runner.go:130] > #
	I0429 00:32:35.279799   54766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 00:32:35.279808   54766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 00:32:35.279810   54766 command_runner.go:130] > #
	I0429 00:32:35.279819   54766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 00:32:35.279824   54766 command_runner.go:130] > # feature.
	I0429 00:32:35.279830   54766 command_runner.go:130] > #
	I0429 00:32:35.279836   54766 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 00:32:35.279844   54766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 00:32:35.279850   54766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 00:32:35.279858   54766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 00:32:35.279864   54766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 00:32:35.279869   54766 command_runner.go:130] > #
	I0429 00:32:35.279875   54766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 00:32:35.279883   54766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 00:32:35.279887   54766 command_runner.go:130] > #
	I0429 00:32:35.279893   54766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 00:32:35.279901   54766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 00:32:35.279906   54766 command_runner.go:130] > #
	I0429 00:32:35.279917   54766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 00:32:35.279925   54766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 00:32:35.279931   54766 command_runner.go:130] > # limitation.
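(Editor's note: the seccomp notifier described in the comments above can be exercised with a small config override. The sketch below is illustrative only and was not part of this test run; it assumes CRI-O's default drop-in directory /etc/crio/crio.conf.d/ and a hypothetical pod name.)

  sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
  [crio.runtime.runtimes.runc]
  runtime_path = "/usr/bin/runc"
  allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
  EOF
  sudo systemctl restart crio

  # The pod opts in via the annotation; restartPolicy must be Never, as noted above.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: seccomp-debug            # hypothetical name
    annotations:
      io.kubernetes.cri-o.seccompNotifierAction: "stop"
  spec:
    restartPolicy: Never
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
  EOF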
	I0429 00:32:35.279935   54766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 00:32:35.279942   54766 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 00:32:35.279946   54766 command_runner.go:130] > runtime_type = "oci"
	I0429 00:32:35.279951   54766 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 00:32:35.279954   54766 command_runner.go:130] > runtime_config_path = ""
	I0429 00:32:35.279961   54766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 00:32:35.279965   54766 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 00:32:35.279969   54766 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 00:32:35.279973   54766 command_runner.go:130] > monitor_env = [
	I0429 00:32:35.279979   54766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 00:32:35.279984   54766 command_runner.go:130] > ]
	I0429 00:32:35.279989   54766 command_runner.go:130] > privileged_without_host_devices = false
	I0429 00:32:35.279997   54766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 00:32:35.280004   54766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 00:32:35.280010   54766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 00:32:35.280019   54766 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0429 00:32:35.280028   54766 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 00:32:35.280036   54766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 00:32:35.280047   54766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 00:32:35.280057   54766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 00:32:35.280064   54766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 00:32:35.280071   54766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 00:32:35.280076   54766 command_runner.go:130] > # Example:
	I0429 00:32:35.280081   54766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 00:32:35.280088   54766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 00:32:35.280093   54766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 00:32:35.280100   54766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 00:32:35.280104   54766 command_runner.go:130] > # cpuset = "0-1"
	I0429 00:32:35.280108   54766 command_runner.go:130] > # cpushares = "5"
	I0429 00:32:35.280112   54766 command_runner.go:130] > # Where:
	I0429 00:32:35.280119   54766 command_runner.go:130] > # The workload name is workload-type.
	I0429 00:32:35.280127   54766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 00:32:35.280134   54766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 00:32:35.280143   54766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 00:32:35.280154   54766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 00:32:35.280159   54766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
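(Editor's note: for completeness, a pod opting into the example workload above might look like the following sketch. Names follow the example annotations in the comments; the exact per-container syntax can vary by CRI-O version, and this was not run as part of the test.)

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: tuned-pod                            # hypothetical name
    annotations:
      io.crio/workload: ""                     # activation annotation; value is ignored
      io.crio.workload-type/app: '{"cpushares": "512"}'
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
  EOF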
	I0429 00:32:35.280166   54766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 00:32:35.280173   54766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 00:32:35.280179   54766 command_runner.go:130] > # Default value is set to true
	I0429 00:32:35.280183   54766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 00:32:35.280191   54766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 00:32:35.280197   54766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 00:32:35.280202   54766 command_runner.go:130] > # Default value is set to 'false'
	I0429 00:32:35.280207   54766 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 00:32:35.280213   54766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 00:32:35.280219   54766 command_runner.go:130] > #
	I0429 00:32:35.280225   54766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 00:32:35.280235   54766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 00:32:35.280244   54766 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 00:32:35.280250   54766 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 00:32:35.280255   54766 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 00:32:35.280258   54766 command_runner.go:130] > [crio.image]
	I0429 00:32:35.280263   54766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 00:32:35.280267   54766 command_runner.go:130] > # default_transport = "docker://"
	I0429 00:32:35.280274   54766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 00:32:35.280280   54766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 00:32:35.280284   54766 command_runner.go:130] > # global_auth_file = ""
	I0429 00:32:35.280288   54766 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 00:32:35.280293   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.280297   54766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 00:32:35.280303   54766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 00:32:35.280308   54766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 00:32:35.280313   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.280316   54766 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 00:32:35.280322   54766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 00:32:35.280327   54766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0429 00:32:35.280332   54766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0429 00:32:35.280338   54766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 00:32:35.280341   54766 command_runner.go:130] > # pause_command = "/pause"
	I0429 00:32:35.280350   54766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 00:32:35.280356   54766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 00:32:35.280361   54766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 00:32:35.280367   54766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 00:32:35.280372   54766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 00:32:35.280377   54766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 00:32:35.280381   54766 command_runner.go:130] > # pinned_images = [
	I0429 00:32:35.280384   54766 command_runner.go:130] > # ]
	I0429 00:32:35.280389   54766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 00:32:35.280395   54766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 00:32:35.280400   54766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 00:32:35.280406   54766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 00:32:35.280410   54766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 00:32:35.280414   54766 command_runner.go:130] > # signature_policy = ""
	I0429 00:32:35.280421   54766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 00:32:35.280430   54766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 00:32:35.280437   54766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 00:32:35.280445   54766 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0429 00:32:35.280450   54766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 00:32:35.280456   54766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 00:32:35.280464   54766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 00:32:35.280472   54766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 00:32:35.280478   54766 command_runner.go:130] > # changing them here.
	I0429 00:32:35.280482   54766 command_runner.go:130] > # insecure_registries = [
	I0429 00:32:35.280485   54766 command_runner.go:130] > # ]
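(Editor's note: as the comments above suggest, registry trust is normally configured in /etc/containers/registries.conf rather than here. A minimal sketch marking a hypothetical internal registry as insecure, not part of this run:)

  sudo tee -a /etc/containers/registries.conf <<'EOF'
  [[registry]]
  location = "registry.example.internal:5000"   # hypothetical registry host
  insecure = true
  EOF
  sudo systemctl restart crio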
	I0429 00:32:35.280494   54766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 00:32:35.280498   54766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 00:32:35.280504   54766 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 00:32:35.280510   54766 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 00:32:35.280516   54766 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 00:32:35.280524   54766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 00:32:35.280533   54766 command_runner.go:130] > # CNI plugins.
	I0429 00:32:35.280541   54766 command_runner.go:130] > [crio.network]
	I0429 00:32:35.280554   54766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 00:32:35.280565   54766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 00:32:35.280569   54766 command_runner.go:130] > # cni_default_network = ""
	I0429 00:32:35.280581   54766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 00:32:35.280588   54766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 00:32:35.280594   54766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 00:32:35.280600   54766 command_runner.go:130] > # plugin_dirs = [
	I0429 00:32:35.280603   54766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 00:32:35.280607   54766 command_runner.go:130] > # ]
	I0429 00:32:35.280614   54766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 00:32:35.280621   54766 command_runner.go:130] > [crio.metrics]
	I0429 00:32:35.280626   54766 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 00:32:35.280632   54766 command_runner.go:130] > enable_metrics = true
	I0429 00:32:35.280636   54766 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 00:32:35.280643   54766 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 00:32:35.280650   54766 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0429 00:32:35.280658   54766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 00:32:35.280666   54766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 00:32:35.280675   54766 command_runner.go:130] > # metrics_collectors = [
	I0429 00:32:35.280684   54766 command_runner.go:130] > # 	"operations",
	I0429 00:32:35.280695   54766 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 00:32:35.280707   54766 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 00:32:35.280716   54766 command_runner.go:130] > # 	"operations_errors",
	I0429 00:32:35.280723   54766 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 00:32:35.280733   54766 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 00:32:35.280740   54766 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 00:32:35.280751   54766 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 00:32:35.280757   54766 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 00:32:35.280764   54766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 00:32:35.280773   54766 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 00:32:35.280781   54766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 00:32:35.280794   54766 command_runner.go:130] > # 	"containers_oom_total",
	I0429 00:32:35.280804   54766 command_runner.go:130] > # 	"containers_oom",
	I0429 00:32:35.280811   54766 command_runner.go:130] > # 	"processes_defunct",
	I0429 00:32:35.280818   54766 command_runner.go:130] > # 	"operations_total",
	I0429 00:32:35.280825   54766 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 00:32:35.280831   54766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 00:32:35.280836   54766 command_runner.go:130] > # 	"operations_errors_total",
	I0429 00:32:35.280840   54766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 00:32:35.280852   54766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 00:32:35.280857   54766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 00:32:35.280861   54766 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 00:32:35.280868   54766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 00:32:35.280872   54766 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 00:32:35.280881   54766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 00:32:35.280888   54766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 00:32:35.280891   54766 command_runner.go:130] > # ]
	I0429 00:32:35.280898   54766 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 00:32:35.280903   54766 command_runner.go:130] > # metrics_port = 9090
	I0429 00:32:35.280910   54766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 00:32:35.280916   54766 command_runner.go:130] > # metrics_socket = ""
	I0429 00:32:35.280921   54766 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 00:32:35.280929   54766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 00:32:35.280937   54766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 00:32:35.280944   54766 command_runner.go:130] > # certificate on any modification event.
	I0429 00:32:35.280948   54766 command_runner.go:130] > # metrics_cert = ""
	I0429 00:32:35.280955   54766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 00:32:35.280959   54766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 00:32:35.280966   54766 command_runner.go:130] > # metrics_key = ""
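(Editor's note: to illustrate the metrics options above, a drop-in like the following would expose a subset of collectors on the default port. File name and collector choice are hypothetical; this was not part of the run.)

  sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
  [crio.metrics]
  enable_metrics = true
  metrics_port = 9090
  metrics_collectors = ["operations", "image_pulls_failure_total"]
  EOF
  sudo systemctl restart crio
  curl -s http://127.0.0.1:9090/metrics | head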
	I0429 00:32:35.280972   54766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 00:32:35.280978   54766 command_runner.go:130] > [crio.tracing]
	I0429 00:32:35.280983   54766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 00:32:35.280989   54766 command_runner.go:130] > # enable_tracing = false
	I0429 00:32:35.280995   54766 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0429 00:32:35.281001   54766 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 00:32:35.281007   54766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 00:32:35.281014   54766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0429 00:32:35.281018   54766 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 00:32:35.281022   54766 command_runner.go:130] > [crio.nri]
	I0429 00:32:35.281027   54766 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 00:32:35.281030   54766 command_runner.go:130] > # enable_nri = false
	I0429 00:32:35.281034   54766 command_runner.go:130] > # NRI socket to listen on.
	I0429 00:32:35.281038   54766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 00:32:35.281044   54766 command_runner.go:130] > # NRI plugin directory to use.
	I0429 00:32:35.281049   54766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 00:32:35.281062   54766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 00:32:35.281070   54766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 00:32:35.281082   54766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 00:32:35.281091   54766 command_runner.go:130] > # nri_disable_connections = false
	I0429 00:32:35.281098   54766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 00:32:35.281107   54766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 00:32:35.281116   54766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 00:32:35.281122   54766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 00:32:35.281128   54766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 00:32:35.281135   54766 command_runner.go:130] > [crio.stats]
	I0429 00:32:35.281140   54766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 00:32:35.281147   54766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 00:32:35.281154   54766 command_runner.go:130] > # stats_collection_period = 0
	I0429 00:32:35.281186   54766 command_runner.go:130] ! time="2024-04-29 00:32:35.237188614Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 00:32:35.281200   54766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 00:32:35.281311   54766 cni.go:84] Creating CNI manager for ""
	I0429 00:32:35.281321   54766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 00:32:35.281329   54766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:32:35.281348   54766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-061470 NodeName:multinode-061470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:32:35.281481   54766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-061470"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:32:35.281535   54766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:32:35.293041   54766 command_runner.go:130] > kubeadm
	I0429 00:32:35.293055   54766 command_runner.go:130] > kubectl
	I0429 00:32:35.293058   54766 command_runner.go:130] > kubelet
	I0429 00:32:35.293267   54766 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:32:35.293310   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:32:35.304088   54766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0429 00:32:35.325328   54766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:32:35.345389   54766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0429 00:32:35.365404   54766 ssh_runner.go:195] Run: grep 192.168.39.59	control-plane.minikube.internal$ /etc/hosts
	I0429 00:32:35.370056   54766 command_runner.go:130] > 192.168.39.59	control-plane.minikube.internal
	I0429 00:32:35.370149   54766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:32:35.521576   54766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:32:35.537744   54766 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470 for IP: 192.168.39.59
	I0429 00:32:35.537766   54766 certs.go:194] generating shared ca certs ...
	I0429 00:32:35.537787   54766 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:32:35.537959   54766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:32:35.538011   54766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:32:35.538043   54766 certs.go:256] generating profile certs ...
	I0429 00:32:35.538133   54766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/client.key
	I0429 00:32:35.538191   54766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key.e02763ff
	I0429 00:32:35.538233   54766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key
	I0429 00:32:35.538244   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 00:32:35.538259   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 00:32:35.538281   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 00:32:35.538294   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 00:32:35.538308   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 00:32:35.538322   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 00:32:35.538342   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 00:32:35.538360   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 00:32:35.538426   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:32:35.538459   54766 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:32:35.538469   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:32:35.538489   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:32:35.538511   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:32:35.538531   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:32:35.538567   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:32:35.538596   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.538610   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.538622   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.539223   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:32:35.566932   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:32:35.593631   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:32:35.619833   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:32:35.646406   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 00:32:35.673295   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 00:32:35.699643   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:32:35.725752   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 00:32:35.755049   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:32:35.782746   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:32:35.808284   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:32:35.834917   54766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:32:35.853682   54766 ssh_runner.go:195] Run: openssl version
	I0429 00:32:35.860989   54766 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 00:32:35.861059   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:32:35.874367   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.879825   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.879983   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.880043   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.886504   54766 command_runner.go:130] > b5213941
	I0429 00:32:35.886784   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:32:35.898242   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:32:35.911836   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917785   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917819   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917859   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.924408   54766 command_runner.go:130] > 51391683
	I0429 00:32:35.924976   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:32:35.936601   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:32:35.949641   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955102   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955142   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955193   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.961553   54766 command_runner.go:130] > 3ec20f2e
	I0429 00:32:35.961756   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
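(Editor's note: the three certificate installs above all follow the same pattern: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs so the system trust lookup finds it. A generic sketch with a hypothetical ca.pem path:)

  sudo ln -fs /usr/share/ca-certificates/ca.pem /etc/ssl/certs/ca.pem
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem)
  sudo ln -fs /etc/ssl/certs/ca.pem "/etc/ssl/certs/${HASH}.0"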
	I0429 00:32:35.973244   54766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:32:35.978657   54766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:32:35.978688   54766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 00:32:35.978699   54766 command_runner.go:130] > Device: 253,1	Inode: 1057302     Links: 1
	I0429 00:32:35.978708   54766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 00:32:35.978717   54766 command_runner.go:130] > Access: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978725   54766 command_runner.go:130] > Modify: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978732   54766 command_runner.go:130] > Change: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978743   54766 command_runner.go:130] >  Birth: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978860   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:32:35.985032   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.985264   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:32:35.991557   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.991620   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:32:35.997326   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.997642   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:32:36.003543   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.003609   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:32:36.009637   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.009696   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 00:32:36.015367   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.015646   54766 kubeadm.go:391] StartCluster: {Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:32:36.015765   54766 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:32:36.015799   54766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:32:36.057490   54766 command_runner.go:130] > b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039
	I0429 00:32:36.057519   54766 command_runner.go:130] > 39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75
	I0429 00:32:36.057528   54766 command_runner.go:130] > b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059
	I0429 00:32:36.057538   54766 command_runner.go:130] > 54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6
	I0429 00:32:36.057547   54766 command_runner.go:130] > 97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f
	I0429 00:32:36.057556   54766 command_runner.go:130] > 7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29
	I0429 00:32:36.057564   54766 command_runner.go:130] > feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23
	I0429 00:32:36.057577   54766 command_runner.go:130] > 3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8
	I0429 00:32:36.057603   54766 cri.go:91] found id: "b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039"
	I0429 00:32:36.057614   54766 cri.go:91] found id: "39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75"
	I0429 00:32:36.057617   54766 cri.go:91] found id: "b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059"
	I0429 00:32:36.057620   54766 cri.go:91] found id: "54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6"
	I0429 00:32:36.057623   54766 cri.go:91] found id: "97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f"
	I0429 00:32:36.057626   54766 cri.go:91] found id: "7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29"
	I0429 00:32:36.057629   54766 cri.go:91] found id: "feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23"
	I0429 00:32:36.057632   54766 cri.go:91] found id: "3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8"
	I0429 00:32:36.057634   54766 cri.go:91] found id: ""
	I0429 00:32:36.057676   54766 ssh_runner.go:195] Run: sudo runc list -f json
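(Editor's note: the --quiet listing above prints only container IDs. Dropping --quiet, or inspecting an ID prefix, shows the matching pod and container names; a sketch using one of the IDs found above:)

  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
  sudo crictl inspect --output go-template --template '{{.status.metadata.name}}' b96cb4f67c31d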
	
	
	==> CRI-O <==
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.149510363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350845149487591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd060b9d-1cb0-4a1d-84ba-748f282206f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.150747425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c670a34-6c5d-421e-b0cd-86bb58777d07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.150878873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c670a34-6c5d-421e-b0cd-86bb58777d07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.151365081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c670a34-6c5d-421e-b0cd-86bb58777d07 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.198619272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf373b7e-c3f2-4e74-bc68-f93d1f3d8c03 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.198721308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf373b7e-c3f2-4e74-bc68-f93d1f3d8c03 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.200691416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3c01118-6938-4914-a32c-300d5991d669 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.201143619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350845201118981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3c01118-6938-4914-a32c-300d5991d669 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.201814979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0234233-027a-4005-98e0-0f00ef81a27e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.201987062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0234233-027a-4005-98e0-0f00ef81a27e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.202309672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0234233-027a-4005-98e0-0f00ef81a27e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.251525914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e88acd6f-c54e-4016-9bcb-f0030620143a name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.251659439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e88acd6f-c54e-4016-9bcb-f0030620143a name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.252627333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0026889-9c46-4b0b-b3d0-f4a8d9acd70a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.253346104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350845253315122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0026889-9c46-4b0b-b3d0-f4a8d9acd70a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.254224452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e12bf77-e06d-4e59-a51a-6d8c205351af name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.254304786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e12bf77-e06d-4e59-a51a-6d8c205351af name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.254646871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e12bf77-e06d-4e59-a51a-6d8c205351af name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.305317032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45c78813-2e6e-438f-ac89-c12262651a06 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.305436414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45c78813-2e6e-438f-ac89-c12262651a06 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.306609867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea4ff709-31ff-436d-b636-1ecc7205d04a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.307136263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350845307105588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea4ff709-31ff-436d-b636-1ecc7205d04a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.307915496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0f970be-c477-4ef5-8ad1-e1bcd0cb6faa name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.307999000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0f970be-c477-4ef5-8ad1-e1bcd0cb6faa name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:34:05 multinode-061470 crio[2855]: time="2024-04-29 00:34:05.308467175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0f970be-c477-4ef5-8ad1-e1bcd0cb6faa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	11b5ac9b0cc33       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      48 seconds ago       Running             busybox                   1                   11a98ae46b871       busybox-fc5497c4f-hbcvz
	6e26fec3a142e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   7321ac60318dc       kindnet-zqmjk
	a64ecb02161b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   5eb7a0edb505d       coredns-7db6d8ff4d-r4bhp
	434862f131515       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   07fd2d5fe19c6       storage-provisioner
	9ea0e03bd31c3       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   20df3ee242e02       kube-proxy-4xgkq
	42726d45ab665       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   9ab5d11f11726       kube-scheduler-multinode-061470
	b818219749ed4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   0a141e8c6c052       etcd-multinode-061470
	4e54ce77f7f9a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   01138626639db       kube-apiserver-multinode-061470
	f086e122efd0a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   49f8e4233acee       kube-controller-manager-multinode-061470
	dff2d8cb543c1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   dfca3a66574d8       busybox-fc5497c4f-hbcvz
	b96cb4f67c31d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   12ec373cd2449       coredns-7db6d8ff4d-r4bhp
	39b8488302397       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   60584a1c18ea6       storage-provisioner
	b61b00d21f43e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   b8932dfb71a17       kube-proxy-4xgkq
	54136ed2ec098       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   2bb00020d94fb       kindnet-zqmjk
	97d87b80717b4       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago        Exited              kube-scheduler            0                   ceb5f4560679f       kube-scheduler-multinode-061470
	7d498bb9fe676       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   ddd9273f31406       etcd-multinode-061470
	feb59e1dcd4cb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago        Exited              kube-controller-manager   0                   693b9ed36dcc4       kube-controller-manager-multinode-061470
	3831c13bc6184       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago        Exited              kube-apiserver            0                   fe519d3311f34       kube-apiserver-multinode-061470
	
	
	==> coredns [a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45756 - 18737 "HINFO IN 434757041915740770.52446483414376084. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.013748968s
	
	
	==> coredns [b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039] <==
	[INFO] 10.244.1.2:39928 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179859s
	[INFO] 10.244.1.2:48329 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177624s
	[INFO] 10.244.1.2:60954 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114302s
	[INFO] 10.244.1.2:39143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001996062s
	[INFO] 10.244.1.2:57685 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248408s
	[INFO] 10.244.1.2:55056 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000298412s
	[INFO] 10.244.1.2:57477 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000258113s
	[INFO] 10.244.0.3:44859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078253s
	[INFO] 10.244.0.3:58305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00005714s
	[INFO] 10.244.0.3:58583 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043115s
	[INFO] 10.244.0.3:35160 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093128s
	[INFO] 10.244.1.2:46440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000296076s
	[INFO] 10.244.1.2:53786 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008941s
	[INFO] 10.244.1.2:55749 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075948s
	[INFO] 10.244.1.2:38358 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068043s
	[INFO] 10.244.0.3:46826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116218s
	[INFO] 10.244.0.3:48256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105442s
	[INFO] 10.244.0.3:50215 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094963s
	[INFO] 10.244.0.3:36144 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008138s
	[INFO] 10.244.1.2:50679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126126s
	[INFO] 10.244.1.2:56121 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131799s
	[INFO] 10.244.1.2:34995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091915s
	[INFO] 10.244.1.2:36269 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123998s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-061470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-061470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T00_25_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:25:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:34:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    multinode-061470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ce16e3900734f698b7f07cee0a80904
	  System UUID:                3ce16e39-0073-4f69-8b7f-07cee0a80904
	  Boot ID:                    e490d2bd-22eb-4348-b16c-88ecf79bfed6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hbcvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 coredns-7db6d8ff4d-r4bhp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 etcd-multinode-061470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-zqmjk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m59s
	  kube-system                 kube-apiserver-multinode-061470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-multinode-061470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-4xgkq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-scheduler-multinode-061470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m57s                  kube-proxy       
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m                     node-controller  Node multinode-061470 event: Registered Node multinode-061470 in Controller
	  Normal  NodeReady                7m27s                  kubelet          Node multinode-061470 status is now: NodeReady
	  Normal  Starting                 88s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)      kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)      kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)      kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                    node-controller  Node multinode-061470 event: Registered Node multinode-061470 in Controller
	
	
	Name:               multinode-061470-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061470-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-061470
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_33_22_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:33:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061470-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:34:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:33:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:33:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:33:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:33:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.153
	  Hostname:    multinode-061470-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27acee0a5e2241b4bd50a401d18843cf
	  System UUID:                27acee0a-5e22-41b4-bd50-a401d18843cf
	  Boot ID:                    350f2cfb-b4bb-4845-890b-f2a283ffbd2b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vxgzh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-gnscp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m53s
	  kube-system                 kube-proxy-xzttx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 38s                    kube-proxy  
	  Normal  Starting                 6m48s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m54s (x2 over 6m54s)  kubelet     Node multinode-061470-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m54s (x2 over 6m54s)  kubelet     Node multinode-061470-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m54s (x2 over 6m54s)  kubelet     Node multinode-061470-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m54s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m43s                  kubelet     Node multinode-061470-m02 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    43s (x2 over 43s)      kubelet     Node multinode-061470-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 43s)      kubelet     Node multinode-061470-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s (x2 over 43s)      kubelet     Node multinode-061470-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                34s                    kubelet     Node multinode-061470-m02 status is now: NodeReady
	
	
	Name:               multinode-061470-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061470-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-061470
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_33_53_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:33:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061470-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:34:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:34:02 +0000   Mon, 29 Apr 2024 00:33:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:34:02 +0000   Mon, 29 Apr 2024 00:33:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:34:02 +0000   Mon, 29 Apr 2024 00:33:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:34:02 +0000   Mon, 29 Apr 2024 00:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    multinode-061470-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 04b426dd7dca4485a67e7a6fac6628d9
	  System UUID:                04b426dd-7dca-4485-a67e-7a6fac6628d9
	  Boot ID:                    aab0fec9-98e4-405d-8584-70e70f7b12ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8zgdq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-proxy-cjx8c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m56s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m14s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m1s (x2 over 6m1s)    kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x2 over 6m1s)    kubelet     Node multinode-061470-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x2 over 6m1s)    kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m51s                  kubelet     Node multinode-061470-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet     Node multinode-061470-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m10s                  kubelet     Node multinode-061470-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-061470-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-061470-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-061470-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.068144] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.177533] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.166514] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299233] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.853304] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062834] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.192418] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.840254] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.715682] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.076554] kauditd_printk_skb: 41 callbacks suppressed
	[Apr29 00:26] systemd-fstab-generator[1470]: Ignoring "noauto" option for root device
	[  +0.137853] kauditd_printk_skb: 21 callbacks suppressed
	[ +33.128082] kauditd_printk_skb: 60 callbacks suppressed
	[Apr29 00:27] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 00:32] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.152624] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.182475] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.147543] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.298584] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.788816] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +1.822195] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +5.642751] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.898555] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.841080] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[Apr29 00:33] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29] <==
	{"level":"info","ts":"2024-04-29T00:25:47.707164Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:25:47.707202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:25:47.712911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:25:47.713077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:25:47.717248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:25:47.779674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.59:2379"}
	{"level":"warn","ts":"2024-04-29T00:27:12.097624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.940301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14211266022879824445 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:45388f273e548a3c>","response":"size:42"}
	{"level":"info","ts":"2024-04-29T00:27:12.097981Z","caller":"traceutil/trace.go:171","msg":"trace[1998598625] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"184.254409ms","start":"2024-04-29T00:27:11.913694Z","end":"2024-04-29T00:27:12.097949Z","steps":["trace[1998598625] 'process raft request'  (duration: 184.186259ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:27:12.098289Z","caller":"traceutil/trace.go:171","msg":"trace[1222526137] linearizableReadLoop","detail":"{readStateIndex:518; appliedIndex:517; }","duration":"244.509979ms","start":"2024-04-29T00:27:11.853769Z","end":"2024-04-29T00:27:12.098279Z","steps":["trace[1222526137] 'read index received'  (duration: 63.563665ms)","trace[1222526137] 'applied index is now lower than readState.Index'  (duration: 180.945365ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:27:12.098412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.623491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-061470-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T00:27:12.098459Z","caller":"traceutil/trace.go:171","msg":"trace[268372184] range","detail":"{range_begin:/registry/minions/multinode-061470-m02; range_end:; response_count:1; response_revision:493; }","duration":"244.705932ms","start":"2024-04-29T00:27:11.853747Z","end":"2024-04-29T00:27:12.098452Z","steps":["trace[268372184] 'agreement among raft nodes before linearized reading'  (duration: 244.59642ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:04.797642Z","caller":"traceutil/trace.go:171","msg":"trace[1370174191] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"191.342121ms","start":"2024-04-29T00:28:04.606283Z","end":"2024-04-29T00:28:04.797625Z","steps":["trace[1370174191] 'process raft request'  (duration: 142.758082ms)","trace[1370174191] 'compare'  (duration: 48.108418ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:04.797939Z","caller":"traceutil/trace.go:171","msg":"trace[907953116] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"174.800225ms","start":"2024-04-29T00:28:04.623122Z","end":"2024-04-29T00:28:04.797922Z","steps":["trace[907953116] 'process raft request'  (duration: 174.200967ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:05.633344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.998326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-061470-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:05.633427Z","caller":"traceutil/trace.go:171","msg":"trace[7548960] range","detail":"{range_begin:/registry/csinodes/multinode-061470-m03; range_end:; response_count:0; response_revision:656; }","duration":"124.123552ms","start":"2024-04-29T00:28:05.509287Z","end":"2024-04-29T00:28:05.633411Z","steps":["trace[7548960] 'range keys from in-memory index tree'  (duration: 123.919529ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:31:02.594355Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T00:31:02.594528Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-061470","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"]}
	{"level":"warn","ts":"2024-04-29T00:31:02.595014Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.595052Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.600543Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.600676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:31:02.681346Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8376b9efef0ac538","current-leader-member-id":"8376b9efef0ac538"}
	{"level":"info","ts":"2024-04-29T00:31:02.684385Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:31:02.684549Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:31:02.684562Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-061470","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"]}
	
	
	==> etcd [b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e] <==
	{"level":"info","ts":"2024-04-29T00:32:39.022461Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:32:39.022481Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:32:39.022734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 switched to configuration voters=(9472963306379199800)"}
	{"level":"info","ts":"2024-04-29T00:32:39.02288Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","added-peer-id":"8376b9efef0ac538","added-peer-peer-urls":["https://192.168.39.59:2380"]}
	{"level":"info","ts":"2024-04-29T00:32:39.023031Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:32:39.023082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:32:39.030335Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T00:32:39.030575Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8376b9efef0ac538","initial-advertise-peer-urls":["https://192.168.39.59:2380"],"listen-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T00:32:39.032952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:32:39.033237Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:32:39.033273Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:32:40.861617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgPreVoteResp from 8376b9efef0ac538 at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgVoteResp from 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8376b9efef0ac538 elected leader 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.868149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:32:40.868097Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8376b9efef0ac538","local-member-attributes":"{Name:multinode-061470 ClientURLs:[https://192.168.39.59:2379]}","request-path":"/0/members/8376b9efef0ac538/attributes","cluster-id":"ec2082d3763590b8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:32:40.869113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:32:40.869367Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:32:40.869382Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:32:40.87147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.59:2379"}
	{"level":"info","ts":"2024-04-29T00:32:40.872325Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:34:05 up 8 min,  0 users,  load average: 0.29, 0.31, 0.17
	Linux multinode-061470 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6] <==
	I0429 00:30:17.956358       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:27.962131       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:27.962182       1 main.go:227] handling current node
	I0429 00:30:27.962194       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:27.962200       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:27.962335       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:27.962374       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:37.978753       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:37.979026       1 main.go:227] handling current node
	I0429 00:30:37.979137       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:37.979151       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:37.979497       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:37.979616       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:47.992965       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:47.993010       1 main.go:227] handling current node
	I0429 00:30:47.993021       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:47.993028       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:47.993137       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:47.993167       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:58.002051       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:58.002149       1 main.go:227] handling current node
	I0429 00:30:58.002177       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:58.002196       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:58.002359       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:58.002408       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad] <==
	I0429 00:33:24.193219       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:33:34.203322       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:33:34.203409       1 main.go:227] handling current node
	I0429 00:33:34.203433       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:33:34.203451       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:33:34.203556       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:33:34.203575       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:33:44.212244       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:33:44.212448       1 main.go:227] handling current node
	I0429 00:33:44.212589       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:33:44.212724       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:33:44.213317       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:33:44.213478       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:33:54.226127       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:33:54.226231       1 main.go:227] handling current node
	I0429 00:33:54.226267       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:33:54.226299       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:33:54.226423       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:33:54.226444       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.2.0/24] 
	I0429 00:34:04.232032       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:34:04.232117       1 main.go:227] handling current node
	I0429 00:34:04.232149       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:34:04.232171       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:34:04.232288       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:34:04.232307       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8] <==
	E0429 00:31:02.611132       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 00:31:02.611347       1 controller.go:84] Shutting down OpenAPI AggregationController
	E0429 00:31:02.612348       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613162       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613220       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613240       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 00:31:02.613687       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0429 00:31:02.613808       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0429 00:31:02.614057       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:31:02.614112       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0429 00:31:02.614136       1 controller.go:167] Shutting down OpenAPI controller
	I0429 00:31:02.614173       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0429 00:31:02.614195       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0429 00:31:02.614238       1 establishing_controller.go:87] Shutting down EstablishingController
	I0429 00:31:02.614259       1 naming_controller.go:302] Shutting down NamingConditionController
	I0429 00:31:02.614308       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0429 00:31:02.614321       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0429 00:31:02.614332       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0429 00:31:02.614362       1 available_controller.go:439] Shutting down AvailableConditionController
	I0429 00:31:02.614374       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0429 00:31:02.614385       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0429 00:31:02.614403       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0429 00:31:02.614445       1 controller.go:129] Ending legacy_token_tracking_controller
	I0429 00:31:02.614451       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0429 00:31:02.614467       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	
	
	==> kube-apiserver [4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f] <==
	I0429 00:32:42.242037       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 00:32:42.341898       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:32:42.342010       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:32:42.342036       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:32:42.342058       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:32:42.342080       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:32:42.383495       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:32:42.383566       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:32:42.383662       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:32:42.384256       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:32:42.384429       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:32:42.384490       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:32:42.385984       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:32:42.388812       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:32:42.389397       1 policy_source.go:224] refreshing policies
	I0429 00:32:42.393051       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 00:32:42.398517       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:32:43.220641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:32:44.612416       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:32:44.751580       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 00:32:44.776658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:32:44.846204       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:32:44.859296       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:32:55.753740       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:32:55.854015       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254] <==
	I0429 00:32:56.190609       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:33:18.183493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.114745ms"
	I0429 00:33:18.183665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.486µs"
	I0429 00:33:18.184040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.228µs"
	I0429 00:33:18.193306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.131537ms"
	I0429 00:33:18.193548       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.285µs"
	I0429 00:33:22.364170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m02\" does not exist"
	I0429 00:33:22.379583       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m02" podCIDRs=["10.244.1.0/24"]
	I0429 00:33:24.239904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.667µs"
	I0429 00:33:24.281093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.956µs"
	I0429 00:33:24.290783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.642µs"
	I0429 00:33:24.325949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.337µs"
	I0429 00:33:24.330179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.603µs"
	I0429 00:33:24.333161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.001µs"
	I0429 00:33:25.362502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.47µs"
	I0429 00:33:31.654711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:31.683734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.846µs"
	I0429 00:33:31.705115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.434µs"
	I0429 00:33:34.748728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.043475ms"
	I0429 00:33:34.748891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.24µs"
	I0429 00:33:51.426192       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:52.693138       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:33:52.694044       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:52.707975       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.2.0/24"]
	I0429 00:34:02.090779       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	
	
	==> kube-controller-manager [feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23] <==
	I0429 00:27:12.100672       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m02\" does not exist"
	I0429 00:27:12.113236       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m02" podCIDRs=["10.244.1.0/24"]
	I0429 00:27:15.256280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-061470-m02"
	I0429 00:27:22.316721       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:27:24.832159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.960936ms"
	I0429 00:27:24.851904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.586454ms"
	I0429 00:27:24.854166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.85µs"
	I0429 00:27:24.855878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.437µs"
	I0429 00:27:28.927035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.822473ms"
	I0429 00:27:28.927285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.113µs"
	I0429 00:27:29.745165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.991057ms"
	I0429 00:27:29.745407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.182µs"
	I0429 00:28:04.800312       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:28:04.800632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:04.817108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.2.0/24"]
	I0429 00:28:05.273806       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-061470-m03"
	I0429 00:28:14.562249       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:45.677647       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:46.692491       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:28:46.692618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:46.707556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.3.0/24"]
	I0429 00:28:55.851414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:29:35.328662       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m03"
	I0429 00:29:35.396059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.317012ms"
	I0429 00:29:35.396256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.501µs"
	
	
	==> kube-proxy [9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8] <==
	I0429 00:32:43.307656       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:32:43.347354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.59"]
	I0429 00:32:43.472031       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:32:43.472085       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:32:43.472102       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:32:43.479260       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:32:43.480102       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:32:43.480147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:32:43.481046       1 config.go:192] "Starting service config controller"
	I0429 00:32:43.481103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:32:43.481139       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:32:43.481170       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:32:43.481714       1 config.go:319] "Starting node config controller"
	I0429 00:32:43.481748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:32:43.581875       1 shared_informer.go:320] Caches are synced for node config
	I0429 00:32:43.581989       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:32:43.581998       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059] <==
	I0429 00:26:07.691957       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:26:07.711121       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.59"]
	I0429 00:26:07.775449       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:26:07.775506       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:26:07.775525       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:26:07.778656       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:26:07.778914       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:26:07.778953       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:26:07.780528       1 config.go:192] "Starting service config controller"
	I0429 00:26:07.780571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:26:07.780590       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:26:07.780594       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:26:07.782758       1 config.go:319] "Starting node config controller"
	I0429 00:26:07.782798       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:26:07.881373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:26:07.881462       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:26:07.882942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1] <==
	I0429 00:32:39.858657       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:32:42.257468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 00:32:42.257518       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:32:42.257529       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:32:42.257537       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:32:42.310510       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:32:42.310651       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:32:42.314986       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:32:42.315031       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 00:32:42.315630       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 00:32:42.315733       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:32:42.415540       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f] <==
	E0429 00:25:49.572714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:25:49.572788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.572880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:25:49.572919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:25:49.572891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.572989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:25:49.573804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.573101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:25:49.574568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:25:50.586460       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:25:50.586526       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:25:50.713787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:25:50.713916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 00:25:50.723597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:25:50.723780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:25:50.772022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:25:50.772162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:50.780164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:25:50.780287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:25:50.807125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:25:50.807212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:25:50.808086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:25:50.808153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 00:25:53.755634       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 00:31:02.586814       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:32:38 multinode-061470 kubelet[3071]: W0429 00:32:38.587455    3071 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.59:8443: connect: connection refused
	Apr 29 00:32:38 multinode-061470 kubelet[3071]: E0429 00:32:38.587515    3071 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.59:8443: connect: connection refused
	Apr 29 00:32:39 multinode-061470 kubelet[3071]: I0429 00:32:39.001213    3071 kubelet_node_status.go:73] "Attempting to register node" node="multinode-061470"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.451329    3071 kubelet_node_status.go:112] "Node was previously registered" node="multinode-061470"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.451801    3071 kubelet_node_status.go:76] "Successfully registered node" node="multinode-061470"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.453342    3071 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.454474    3071 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.462746    3071 apiserver.go:52] "Watching apiserver"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.465618    3071 topology_manager.go:215] "Topology Admit Handler" podUID="313d1824-ed50-4033-8c64-33d4dc4b23a5" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466200    3071 topology_manager.go:215] "Topology Admit Handler" podUID="5db303d0-3a93-40b8-a390-a902ebcaa71b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r4bhp"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466463    3071 topology_manager.go:215] "Topology Admit Handler" podUID="e8ab0204-4bf4-4426-9b38-b80b01ddccec" podNamespace="kube-system" podName="kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466875    3071 topology_manager.go:215] "Topology Admit Handler" podUID="2e05361a-9929-4b79-988b-c81f3e3063bf" podNamespace="kube-system" podName="kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466998    3071 topology_manager.go:215] "Topology Admit Handler" podUID="02c11dff-48e7-4ee6-b95a-ff6d46ecd635" podNamespace="default" podName="busybox-fc5497c4f-hbcvz"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.485216    3071 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490660    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e05361a-9929-4b79-988b-c81f3e3063bf-xtables-lock\") pod \"kube-proxy-4xgkq\" (UID: \"2e05361a-9929-4b79-988b-c81f3e3063bf\") " pod="kube-system/kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490723    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/313d1824-ed50-4033-8c64-33d4dc4b23a5-tmp\") pod \"storage-provisioner\" (UID: \"313d1824-ed50-4033-8c64-33d4dc4b23a5\") " pod="kube-system/storage-provisioner"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490761    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-lib-modules\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490784    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e05361a-9929-4b79-988b-c81f3e3063bf-lib-modules\") pod \"kube-proxy-4xgkq\" (UID: \"2e05361a-9929-4b79-988b-c81f3e3063bf\") " pod="kube-system/kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490865    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-cni-cfg\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490915    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-xtables-lock\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:33:37 multinode-061470 kubelet[3071]: E0429 00:33:37.558337    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:34:04.856995   55781 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-061470 -n multinode-061470
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-061470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (308.07s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 stop
E0429 00:35:48.628504   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061470 stop: exit status 82 (2m0.483934612s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-061470-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-061470 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061470 status: exit status 3 (18.678402567s)

                                                
                                                
-- stdout --
	multinode-061470
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-061470-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:36:28.422341   56433 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.153:22: connect: no route to host
	E0429 00:36:28.422379   56433 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.153:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-061470 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-061470 -n multinode-061470
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-061470 logs -n 25: (1.591181666s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470:/home/docker/cp-test_multinode-061470-m02_multinode-061470.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470 sudo cat                                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m02_multinode-061470.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03:/home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470-m03 sudo cat                                   | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp testdata/cp-test.txt                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470:/home/docker/cp-test_multinode-061470-m03_multinode-061470.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470 sudo cat                                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m03_multinode-061470.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt                       | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m02:/home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n                                                                 | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | multinode-061470-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-061470 ssh -n multinode-061470-m02 sudo cat                                   | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | /home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-061470 node stop m03                                                          | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	| node    | multinode-061470 node start                                                             | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC | 29 Apr 24 00:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-061470                                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC |                     |
	| stop    | -p multinode-061470                                                                     | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:28 UTC |                     |
	| start   | -p multinode-061470                                                                     | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:31 UTC | 29 Apr 24 00:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-061470                                                                | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:34 UTC |                     |
	| node    | multinode-061470 node delete                                                            | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:34 UTC | 29 Apr 24 00:34 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-061470 stop                                                                   | multinode-061470 | jenkins | v1.33.0 | 29 Apr 24 00:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:31:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:31:01.590828   54766 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:31:01.590927   54766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:31:01.590936   54766 out.go:304] Setting ErrFile to fd 2...
	I0429 00:31:01.590940   54766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:31:01.591129   54766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:31:01.591694   54766 out.go:298] Setting JSON to false
	I0429 00:31:01.592598   54766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8006,"bootTime":1714342656,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:31:01.592659   54766 start.go:139] virtualization: kvm guest
	I0429 00:31:01.595226   54766 out.go:177] * [multinode-061470] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:31:01.596640   54766 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:31:01.596638   54766 notify.go:220] Checking for updates...
	I0429 00:31:01.598072   54766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:31:01.599558   54766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:31:01.600862   54766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:31:01.602210   54766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:31:01.603595   54766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:31:01.605266   54766 config.go:182] Loaded profile config "multinode-061470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:31:01.605363   54766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:31:01.605754   54766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:31:01.605804   54766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:31:01.621133   54766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0429 00:31:01.621604   54766 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:31:01.622227   54766 main.go:141] libmachine: Using API Version  1
	I0429 00:31:01.622260   54766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:31:01.622602   54766 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:31:01.622770   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.660609   54766 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:31:01.661924   54766 start.go:297] selected driver: kvm2
	I0429 00:31:01.661942   54766 start.go:901] validating driver "kvm2" against &{Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:31:01.662120   54766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:31:01.662434   54766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:31:01.662530   54766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:31:01.677573   54766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:31:01.678256   54766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 00:31:01.678328   54766 cni.go:84] Creating CNI manager for ""
	I0429 00:31:01.678340   54766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 00:31:01.678399   54766 start.go:340] cluster config:
	{Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
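The cluster config dumped above is what gets persisted to the profile's config.json (see the "Saving config to ..." line below). As a purely illustrative sketch, a trimmed-down Go program could read a few of those fields back; the struct and its field names here are assumptions that mirror the keys in the dump, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Trimmed-down, hypothetical view of the profile config; field names are
// assumed to mirror the keys shown in the cluster config dump above.
type node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type clusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
	Nodes  []node
}

func main() {
	// Path taken from the "Saving config to ..." log line below.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s (%s): %d nodes\n", cfg.Name, cfg.Driver, len(cfg.Nodes))
}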
	I0429 00:31:01.678534   54766 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:31:01.680326   54766 out.go:177] * Starting "multinode-061470" primary control-plane node in "multinode-061470" cluster
	I0429 00:31:01.681663   54766 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:31:01.681702   54766 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:31:01.681709   54766 cache.go:56] Caching tarball of preloaded images
	I0429 00:31:01.681785   54766 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:31:01.681796   54766 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:31:01.681915   54766 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/config.json ...
	I0429 00:31:01.682168   54766 start.go:360] acquireMachinesLock for multinode-061470: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:31:01.682214   54766 start.go:364] duration metric: took 26.247µs to acquireMachinesLock for "multinode-061470"
	I0429 00:31:01.682228   54766 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:31:01.682235   54766 fix.go:54] fixHost starting: 
	I0429 00:31:01.682491   54766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:31:01.682521   54766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:31:01.697486   54766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41903
	I0429 00:31:01.698093   54766 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:31:01.698595   54766 main.go:141] libmachine: Using API Version  1
	I0429 00:31:01.698614   54766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:31:01.698904   54766 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:31:01.699089   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.699229   54766 main.go:141] libmachine: (multinode-061470) Calling .GetState
	I0429 00:31:01.700860   54766 fix.go:112] recreateIfNeeded on multinode-061470: state=Running err=<nil>
	W0429 00:31:01.700883   54766 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:31:01.703899   54766 out.go:177] * Updating the running kvm2 "multinode-061470" VM ...
	I0429 00:31:01.705108   54766 machine.go:94] provisionDockerMachine start ...
	I0429 00:31:01.705127   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:31:01.705320   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.707794   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.708271   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.708304   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.708487   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.708635   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.708788   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.708938   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.709064   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.709250   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.709265   54766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:31:01.827933   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061470
	
	I0429 00:31:01.827965   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:01.828240   54766 buildroot.go:166] provisioning hostname "multinode-061470"
	I0429 00:31:01.828275   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:01.828475   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.831103   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.831527   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.831554   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.831678   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.831880   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.832035   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.832177   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.832358   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.832506   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.832517   54766 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-061470 && echo "multinode-061470" | sudo tee /etc/hostname
	I0429 00:31:01.961375   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-061470
	
	I0429 00:31:01.961404   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:01.964020   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.964344   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:01.964381   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:01.964513   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:01.964744   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.964922   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:01.965088   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:01.965263   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:01.965476   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:01.965500   54766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-061470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-061470/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-061470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:31:02.075165   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
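The hostname and /etc/hosts commands above are run over SSH by the kvm2 driver's native client. A minimal sketch of the same round-trip with golang.org/x/crypto/ssh, assuming the address, user, and key path shown in the DHCP-lease and "new ssh client" lines of this log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and user come from the "new ssh client" log lines below.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.59:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}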
	I0429 00:31:02.075190   54766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:31:02.075216   54766 buildroot.go:174] setting up certificates
	I0429 00:31:02.075226   54766 provision.go:84] configureAuth start
	I0429 00:31:02.075238   54766 main.go:141] libmachine: (multinode-061470) Calling .GetMachineName
	I0429 00:31:02.075506   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:31:02.078155   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.078539   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.078563   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.078696   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.080959   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.081287   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.081322   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.081405   54766 provision.go:143] copyHostCerts
	I0429 00:31:02.081433   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:31:02.081464   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:31:02.081473   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:31:02.081553   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:31:02.081651   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:31:02.081679   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:31:02.081689   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:31:02.081733   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:31:02.081789   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:31:02.081812   54766 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:31:02.081821   54766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:31:02.081850   54766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:31:02.081910   54766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.multinode-061470 san=[127.0.0.1 192.168.39.59 localhost minikube multinode-061470]
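configureAuth above issues a server certificate whose SANs cover the VM's IP and hostnames. A simplified Go sketch of producing such a certificate with crypto/x509; note it is self-signed here for brevity, whereas the log signs against the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-061470"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.59")},
		DNSNames:    []string{"localhost", "minikube", "multinode-061470"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}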
	I0429 00:31:02.258265   54766 provision.go:177] copyRemoteCerts
	I0429 00:31:02.258319   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:31:02.258341   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.260787   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.261136   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.261159   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.261349   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:02.261533   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.261688   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:02.261823   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:31:02.351320   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 00:31:02.351408   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:31:02.379432   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 00:31:02.379507   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 00:31:02.420034   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 00:31:02.420109   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 00:31:02.449866   54766 provision.go:87] duration metric: took 374.62986ms to configureAuth
	I0429 00:31:02.449891   54766 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:31:02.450122   54766 config.go:182] Loaded profile config "multinode-061470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:31:02.450199   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:31:02.452768   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.453100   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:31:02.453127   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:31:02.453334   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:31:02.453536   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.453693   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:31:02.453839   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:31:02.453997   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:31:02.454171   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:31:02.454186   54766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:32:33.174713   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:32:33.174748   54766 machine.go:97] duration metric: took 1m31.469628491s to provisionDockerMachine
	I0429 00:32:33.174762   54766 start.go:293] postStartSetup for "multinode-061470" (driver="kvm2")
	I0429 00:32:33.174779   54766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:32:33.174801   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.175145   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:32:33.175172   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.178406   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.178829   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.178857   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.178996   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.179168   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.179338   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.179493   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.266766   54766 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:32:33.271919   54766 command_runner.go:130] > NAME=Buildroot
	I0429 00:32:33.271943   54766 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 00:32:33.271949   54766 command_runner.go:130] > ID=buildroot
	I0429 00:32:33.271956   54766 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 00:32:33.271961   54766 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 00:32:33.272010   54766 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:32:33.272025   54766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:32:33.272081   54766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:32:33.272147   54766 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:32:33.272156   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /etc/ssl/certs/207272.pem
	I0429 00:32:33.272242   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:32:33.282854   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:32:33.312519   54766 start.go:296] duration metric: took 137.742592ms for postStartSetup
	I0429 00:32:33.312555   54766 fix.go:56] duration metric: took 1m31.630319518s for fixHost
	I0429 00:32:33.312574   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.315078   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.315424   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.315455   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.315622   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.315815   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.315973   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.316117   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.316336   54766 main.go:141] libmachine: Using SSH client type: native
	I0429 00:32:33.316488   54766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0429 00:32:33.316498   54766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:32:33.427591   54766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714350753.398655167
	
	I0429 00:32:33.427613   54766 fix.go:216] guest clock: 1714350753.398655167
	I0429 00:32:33.427632   54766 fix.go:229] Guest: 2024-04-29 00:32:33.398655167 +0000 UTC Remote: 2024-04-29 00:32:33.312559236 +0000 UTC m=+91.769784437 (delta=86.095931ms)
	I0429 00:32:33.427650   54766 fix.go:200] guest clock delta is within tolerance: 86.095931ms
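The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it with the host clock against a small tolerance. A minimal sketch of that comparison; only the delta arithmetic mirrors the log, and the 2s threshold is an assumption rather than minikube's actual constant:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1714350753.398655167" (date +%s.%N) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714350753.398655167") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed threshold, not minikube's actual constant
	if math.Abs(delta.Seconds()) < tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}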
	I0429 00:32:33.427656   54766 start.go:83] releasing machines lock for "multinode-061470", held for 1m31.745433671s
	I0429 00:32:33.427674   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.427920   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:32:33.430595   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.430941   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.430963   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.431149   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431616   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431781   54766 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:32:33.431869   54766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:32:33.431900   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.432009   54766 ssh_runner.go:195] Run: cat /version.json
	I0429 00:32:33.432034   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:32:33.434424   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434708   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.434739   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434759   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.434868   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.435048   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.435203   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.435272   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:33.435299   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:33.435346   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.435482   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:32:33.435634   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:32:33.435788   54766 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:32:33.435936   54766 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:32:33.536885   54766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 00:32:33.536945   54766 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
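The `cat /version.json` output above identifies the ISO and kicbase builds baked into the machine. A small sketch of decoding that payload; the struct and its JSON tags simply follow the keys visible in the output and are not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	// Payload copied from the command_runner output above.
	raw := `{"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}`
	var v versionInfo
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ISO %s / minikube %s (commit %s)\n", v.ISOVersion, v.MinikubeVersion, v.Commit)
}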
	I0429 00:32:33.537082   54766 ssh_runner.go:195] Run: systemctl --version
	I0429 00:32:33.543779   54766 command_runner.go:130] > systemd 252 (252)
	I0429 00:32:33.543826   54766 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 00:32:33.543888   54766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:32:33.707703   54766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 00:32:33.727207   54766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 00:32:33.727659   54766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:32:33.727734   54766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:32:33.737965   54766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:32:33.737996   54766 start.go:494] detecting cgroup driver to use...
	I0429 00:32:33.738079   54766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:32:33.756027   54766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:32:33.771637   54766 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:32:33.771696   54766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:32:33.786521   54766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:32:33.800539   54766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:32:33.951688   54766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:32:34.104465   54766 docker.go:233] disabling docker service ...
	I0429 00:32:34.104541   54766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:32:34.122794   54766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:32:34.137524   54766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:32:34.283875   54766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:32:34.431476   54766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:32:34.447054   54766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:32:34.472752   54766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 00:32:34.473139   54766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:32:34.473194   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.484939   54766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:32:34.485017   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.496722   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.508587   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.521346   54766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:32:34.534344   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.546332   54766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.559330   54766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:32:34.571857   54766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:32:34.581768   54766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 00:32:34.581929   54766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:32:34.591744   54766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:32:34.738618   54766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:32:34.998678   54766 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:32:34.998757   54766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:32:35.004613   54766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 00:32:35.004640   54766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 00:32:35.004650   54766 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0429 00:32:35.004661   54766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 00:32:35.004669   54766 command_runner.go:130] > Access: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004678   54766 command_runner.go:130] > Modify: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004686   54766 command_runner.go:130] > Change: 2024-04-29 00:32:34.856129831 +0000
	I0429 00:32:35.004706   54766 command_runner.go:130] >  Birth: -
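After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist before probing it with stat. A minimal polling sketch of that wait; the poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is present")
}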
	I0429 00:32:35.004725   54766 start.go:562] Will wait 60s for crictl version
	I0429 00:32:35.004767   54766 ssh_runner.go:195] Run: which crictl
	I0429 00:32:35.008932   54766 command_runner.go:130] > /usr/bin/crictl
	I0429 00:32:35.009170   54766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:32:35.048789   54766 command_runner.go:130] > Version:  0.1.0
	I0429 00:32:35.048812   54766 command_runner.go:130] > RuntimeName:  cri-o
	I0429 00:32:35.048817   54766 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 00:32:35.048821   54766 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 00:32:35.049990   54766 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
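`crictl version` above prints simple `Key:  value` pairs. A short sketch that shells out to the same binary (located by the `which crictl` result above) and collects those fields into a map; illustrative only, not minikube's parser:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		log.Fatal(err)
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		// Lines look like "RuntimeVersion:  1.29.1".
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Printf("%s %s (CRI API %s)\n", fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}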
	I0429 00:32:35.050494   54766 ssh_runner.go:195] Run: crio --version
	I0429 00:32:35.088087   54766 command_runner.go:130] > crio version 1.29.1
	I0429 00:32:35.088107   54766 command_runner.go:130] > Version:        1.29.1
	I0429 00:32:35.088113   54766 command_runner.go:130] > GitCommit:      unknown
	I0429 00:32:35.088117   54766 command_runner.go:130] > GitCommitDate:  unknown
	I0429 00:32:35.088121   54766 command_runner.go:130] > GitTreeState:   clean
	I0429 00:32:35.088128   54766 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 00:32:35.088133   54766 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 00:32:35.088136   54766 command_runner.go:130] > Compiler:       gc
	I0429 00:32:35.088141   54766 command_runner.go:130] > Platform:       linux/amd64
	I0429 00:32:35.088145   54766 command_runner.go:130] > Linkmode:       dynamic
	I0429 00:32:35.088149   54766 command_runner.go:130] > BuildTags:      
	I0429 00:32:35.088153   54766 command_runner.go:130] >   containers_image_ostree_stub
	I0429 00:32:35.088158   54766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 00:32:35.088161   54766 command_runner.go:130] >   btrfs_noversion
	I0429 00:32:35.088166   54766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 00:32:35.088170   54766 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 00:32:35.088174   54766 command_runner.go:130] >   seccomp
	I0429 00:32:35.088179   54766 command_runner.go:130] > LDFlags:          unknown
	I0429 00:32:35.088183   54766 command_runner.go:130] > SeccompEnabled:   true
	I0429 00:32:35.088187   54766 command_runner.go:130] > AppArmorEnabled:  false
	I0429 00:32:35.089653   54766 ssh_runner.go:195] Run: crio --version
	I0429 00:32:35.129757   54766 command_runner.go:130] > crio version 1.29.1
	I0429 00:32:35.129780   54766 command_runner.go:130] > Version:        1.29.1
	I0429 00:32:35.129790   54766 command_runner.go:130] > GitCommit:      unknown
	I0429 00:32:35.129828   54766 command_runner.go:130] > GitCommitDate:  unknown
	I0429 00:32:35.129843   54766 command_runner.go:130] > GitTreeState:   clean
	I0429 00:32:35.129854   54766 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 00:32:35.129861   54766 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 00:32:35.129865   54766 command_runner.go:130] > Compiler:       gc
	I0429 00:32:35.129872   54766 command_runner.go:130] > Platform:       linux/amd64
	I0429 00:32:35.129876   54766 command_runner.go:130] > Linkmode:       dynamic
	I0429 00:32:35.129883   54766 command_runner.go:130] > BuildTags:      
	I0429 00:32:35.129887   54766 command_runner.go:130] >   containers_image_ostree_stub
	I0429 00:32:35.129892   54766 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 00:32:35.129895   54766 command_runner.go:130] >   btrfs_noversion
	I0429 00:32:35.129903   54766 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 00:32:35.129909   54766 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 00:32:35.129916   54766 command_runner.go:130] >   seccomp
	I0429 00:32:35.129923   54766 command_runner.go:130] > LDFlags:          unknown
	I0429 00:32:35.129931   54766 command_runner.go:130] > SeccompEnabled:   true
	I0429 00:32:35.129939   54766 command_runner.go:130] > AppArmorEnabled:  false
	I0429 00:32:35.132181   54766 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:32:35.133751   54766 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:32:35.136332   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:35.136701   54766 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:32:35.136736   54766 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:32:35.136923   54766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:32:35.141820   54766 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 00:32:35.142059   54766 kubeadm.go:877] updating cluster {Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:32:35.142174   54766 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:32:35.142214   54766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:32:35.191177   54766 command_runner.go:130] > {
	I0429 00:32:35.191200   54766 command_runner.go:130] >   "images": [
	I0429 00:32:35.191204   54766 command_runner.go:130] >     {
	I0429 00:32:35.191211   54766 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 00:32:35.191217   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191222   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 00:32:35.191226   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191230   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191240   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 00:32:35.191249   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 00:32:35.191259   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191264   54766 command_runner.go:130] >       "size": "65291810",
	I0429 00:32:35.191268   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191272   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191278   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191282   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191286   54766 command_runner.go:130] >     },
	I0429 00:32:35.191290   54766 command_runner.go:130] >     {
	I0429 00:32:35.191298   54766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 00:32:35.191303   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191311   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 00:32:35.191314   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191321   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191328   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 00:32:35.191337   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 00:32:35.191340   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191345   54766 command_runner.go:130] >       "size": "1363676",
	I0429 00:32:35.191348   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191355   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191362   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191365   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191369   54766 command_runner.go:130] >     },
	I0429 00:32:35.191372   54766 command_runner.go:130] >     {
	I0429 00:32:35.191378   54766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 00:32:35.191383   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191389   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 00:32:35.191395   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191399   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191408   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 00:32:35.191418   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 00:32:35.191421   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191426   54766 command_runner.go:130] >       "size": "31470524",
	I0429 00:32:35.191431   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191443   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191450   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191454   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191460   54766 command_runner.go:130] >     },
	I0429 00:32:35.191463   54766 command_runner.go:130] >     {
	I0429 00:32:35.191469   54766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 00:32:35.191475   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191480   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 00:32:35.191486   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191490   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191497   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 00:32:35.191511   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 00:32:35.191517   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191521   54766 command_runner.go:130] >       "size": "61245718",
	I0429 00:32:35.191525   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191529   54766 command_runner.go:130] >       "username": "nonroot",
	I0429 00:32:35.191536   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191540   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191546   54766 command_runner.go:130] >     },
	I0429 00:32:35.191549   54766 command_runner.go:130] >     {
	I0429 00:32:35.191555   54766 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 00:32:35.191561   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191566   54766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 00:32:35.191572   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191576   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191585   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 00:32:35.191594   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 00:32:35.191600   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191604   54766 command_runner.go:130] >       "size": "150779692",
	I0429 00:32:35.191610   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191614   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191620   54766 command_runner.go:130] >       },
	I0429 00:32:35.191624   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191630   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191634   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191640   54766 command_runner.go:130] >     },
	I0429 00:32:35.191648   54766 command_runner.go:130] >     {
	I0429 00:32:35.191657   54766 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 00:32:35.191663   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191668   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 00:32:35.191674   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191678   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191687   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 00:32:35.191697   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 00:32:35.191703   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191708   54766 command_runner.go:130] >       "size": "117609952",
	I0429 00:32:35.191714   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191724   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191730   54766 command_runner.go:130] >       },
	I0429 00:32:35.191734   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191738   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191744   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191748   54766 command_runner.go:130] >     },
	I0429 00:32:35.191754   54766 command_runner.go:130] >     {
	I0429 00:32:35.191760   54766 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 00:32:35.191766   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191771   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 00:32:35.191777   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191782   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191791   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 00:32:35.191800   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 00:32:35.191806   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191811   54766 command_runner.go:130] >       "size": "112170310",
	I0429 00:32:35.191817   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191821   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.191826   54766 command_runner.go:130] >       },
	I0429 00:32:35.191830   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191836   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191840   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191846   54766 command_runner.go:130] >     },
	I0429 00:32:35.191849   54766 command_runner.go:130] >     {
	I0429 00:32:35.191858   54766 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 00:32:35.191867   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191874   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 00:32:35.191878   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191882   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191905   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 00:32:35.191916   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 00:32:35.191919   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191923   54766 command_runner.go:130] >       "size": "85932953",
	I0429 00:32:35.191926   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.191929   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.191933   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.191937   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.191940   54766 command_runner.go:130] >     },
	I0429 00:32:35.191943   54766 command_runner.go:130] >     {
	I0429 00:32:35.191949   54766 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 00:32:35.191952   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.191957   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 00:32:35.191960   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191964   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.191971   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 00:32:35.191978   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 00:32:35.191982   54766 command_runner.go:130] >       ],
	I0429 00:32:35.191988   54766 command_runner.go:130] >       "size": "63026502",
	I0429 00:32:35.191992   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.191999   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.192002   54766 command_runner.go:130] >       },
	I0429 00:32:35.192006   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.192010   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.192016   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.192020   54766 command_runner.go:130] >     },
	I0429 00:32:35.192026   54766 command_runner.go:130] >     {
	I0429 00:32:35.192031   54766 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 00:32:35.192038   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.192043   54766 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 00:32:35.192049   54766 command_runner.go:130] >       ],
	I0429 00:32:35.192053   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.192080   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 00:32:35.192093   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 00:32:35.192100   54766 command_runner.go:130] >       ],
	I0429 00:32:35.192107   54766 command_runner.go:130] >       "size": "750414",
	I0429 00:32:35.192111   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.192117   54766 command_runner.go:130] >         "value": "65535"
	I0429 00:32:35.192121   54766 command_runner.go:130] >       },
	I0429 00:32:35.192127   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.192132   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.192138   54766 command_runner.go:130] >       "pinned": true
	I0429 00:32:35.192141   54766 command_runner.go:130] >     }
	I0429 00:32:35.192147   54766 command_runner.go:130] >   ]
	I0429 00:32:35.192150   54766 command_runner.go:130] > }
	I0429 00:32:35.193008   54766 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:32:35.193022   54766 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:32:35.193065   54766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:32:35.228254   54766 command_runner.go:130] > {
	I0429 00:32:35.228283   54766 command_runner.go:130] >   "images": [
	I0429 00:32:35.228289   54766 command_runner.go:130] >     {
	I0429 00:32:35.228301   54766 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 00:32:35.228310   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228318   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 00:32:35.228326   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228332   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228346   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 00:32:35.228362   54766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 00:32:35.228368   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228378   54766 command_runner.go:130] >       "size": "65291810",
	I0429 00:32:35.228384   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228392   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228414   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228424   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228428   54766 command_runner.go:130] >     },
	I0429 00:32:35.228431   54766 command_runner.go:130] >     {
	I0429 00:32:35.228437   54766 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 00:32:35.228444   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228449   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 00:32:35.228452   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228456   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228463   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 00:32:35.228471   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 00:32:35.228475   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228478   54766 command_runner.go:130] >       "size": "1363676",
	I0429 00:32:35.228494   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228501   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228505   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228509   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228512   54766 command_runner.go:130] >     },
	I0429 00:32:35.228516   54766 command_runner.go:130] >     {
	I0429 00:32:35.228525   54766 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 00:32:35.228529   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228534   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 00:32:35.228540   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228544   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228554   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 00:32:35.228564   54766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 00:32:35.228570   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228575   54766 command_runner.go:130] >       "size": "31470524",
	I0429 00:32:35.228579   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228583   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228589   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228593   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228599   54766 command_runner.go:130] >     },
	I0429 00:32:35.228603   54766 command_runner.go:130] >     {
	I0429 00:32:35.228609   54766 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 00:32:35.228615   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228620   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 00:32:35.228626   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228630   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228638   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 00:32:35.228652   54766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 00:32:35.228656   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228661   54766 command_runner.go:130] >       "size": "61245718",
	I0429 00:32:35.228665   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.228672   54766 command_runner.go:130] >       "username": "nonroot",
	I0429 00:32:35.228679   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228685   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228688   54766 command_runner.go:130] >     },
	I0429 00:32:35.228692   54766 command_runner.go:130] >     {
	I0429 00:32:35.228699   54766 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 00:32:35.228707   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228714   54766 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 00:32:35.228736   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228747   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228758   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 00:32:35.228772   54766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 00:32:35.228778   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228782   54766 command_runner.go:130] >       "size": "150779692",
	I0429 00:32:35.228786   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228790   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228796   54766 command_runner.go:130] >       },
	I0429 00:32:35.228800   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228804   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228809   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228815   54766 command_runner.go:130] >     },
	I0429 00:32:35.228818   54766 command_runner.go:130] >     {
	I0429 00:32:35.228824   54766 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 00:32:35.228831   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228836   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 00:32:35.228842   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228846   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228855   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 00:32:35.228863   54766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 00:32:35.228869   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228873   54766 command_runner.go:130] >       "size": "117609952",
	I0429 00:32:35.228876   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228880   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228884   54766 command_runner.go:130] >       },
	I0429 00:32:35.228888   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228894   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228898   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228904   54766 command_runner.go:130] >     },
	I0429 00:32:35.228907   54766 command_runner.go:130] >     {
	I0429 00:32:35.228913   54766 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 00:32:35.228919   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.228927   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 00:32:35.228932   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228936   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.228944   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 00:32:35.228954   54766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 00:32:35.228960   54766 command_runner.go:130] >       ],
	I0429 00:32:35.228967   54766 command_runner.go:130] >       "size": "112170310",
	I0429 00:32:35.228971   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.228974   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.228978   54766 command_runner.go:130] >       },
	I0429 00:32:35.228982   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.228985   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.228989   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.228993   54766 command_runner.go:130] >     },
	I0429 00:32:35.228996   54766 command_runner.go:130] >     {
	I0429 00:32:35.229002   54766 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 00:32:35.229008   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229013   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 00:32:35.229019   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229023   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229039   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 00:32:35.229049   54766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 00:32:35.229052   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229059   54766 command_runner.go:130] >       "size": "85932953",
	I0429 00:32:35.229063   54766 command_runner.go:130] >       "uid": null,
	I0429 00:32:35.229069   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229072   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229076   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.229080   54766 command_runner.go:130] >     },
	I0429 00:32:35.229083   54766 command_runner.go:130] >     {
	I0429 00:32:35.229089   54766 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 00:32:35.229093   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229098   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 00:32:35.229104   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229108   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229118   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 00:32:35.229126   54766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 00:32:35.229132   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229136   54766 command_runner.go:130] >       "size": "63026502",
	I0429 00:32:35.229140   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.229143   54766 command_runner.go:130] >         "value": "0"
	I0429 00:32:35.229147   54766 command_runner.go:130] >       },
	I0429 00:32:35.229151   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229157   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229161   54766 command_runner.go:130] >       "pinned": false
	I0429 00:32:35.229165   54766 command_runner.go:130] >     },
	I0429 00:32:35.229168   54766 command_runner.go:130] >     {
	I0429 00:32:35.229177   54766 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 00:32:35.229181   54766 command_runner.go:130] >       "repoTags": [
	I0429 00:32:35.229186   54766 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 00:32:35.229191   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229195   54766 command_runner.go:130] >       "repoDigests": [
	I0429 00:32:35.229202   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 00:32:35.229213   54766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 00:32:35.229216   54766 command_runner.go:130] >       ],
	I0429 00:32:35.229220   54766 command_runner.go:130] >       "size": "750414",
	I0429 00:32:35.229229   54766 command_runner.go:130] >       "uid": {
	I0429 00:32:35.229235   54766 command_runner.go:130] >         "value": "65535"
	I0429 00:32:35.229239   54766 command_runner.go:130] >       },
	I0429 00:32:35.229249   54766 command_runner.go:130] >       "username": "",
	I0429 00:32:35.229256   54766 command_runner.go:130] >       "spec": null,
	I0429 00:32:35.229265   54766 command_runner.go:130] >       "pinned": true
	I0429 00:32:35.229270   54766 command_runner.go:130] >     }
	I0429 00:32:35.229278   54766 command_runner.go:130] >   ]
	I0429 00:32:35.229283   54766 command_runner.go:130] > }
	I0429 00:32:35.229810   54766 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:32:35.229827   54766 cache_images.go:84] Images are preloaded, skipping loading
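	(Editor's note: the two `crictl images --output json` dumps above are what drive the "all images are preloaded" / "Images are preloaded, skipping loading" conclusion. For anyone reproducing this check by hand, the following is a minimal, self-contained Go sketch, not minikube's own code, that decodes the same JSON shape; the struct mirrors only the keys visible in the dump, and the embedded sample is a trimmed, hypothetical excerpt of one entry.)

	// parseimages.go - illustrative decoder for `crictl images --output json`
	// payloads of the shape shown in the log above. Not minikube's implementation.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Hypothetical, trimmed sample of the payload format logged above.
		raw := []byte(`{"images":[{"id":"e6f18168...","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414","username":"","pinned":true}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			log.Fatal(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%-45v pinned=%v size=%s bytes\n", img.RepoTags, img.Pinned, img.Size)
		}
	}

	(Against the full payload above, every Kubernetes v1.30.0 image — apiserver, controller-manager, scheduler, proxy, etcd, coredns, pause — resolves, which is why cache_images.go skips loading.)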
	I0429 00:32:35.229835   54766 kubeadm.go:928] updating node { 192.168.39.59 8443 v1.30.0 crio true true} ...
	I0429 00:32:35.229939   54766 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-061470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
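	(Editor's note: the kubelet unit dump above shows the flags minikube renders for this node. Purely to illustrate how those flags line up with the node parameters in the config line — KubernetesVersion, ClusterName, node IP — here is a hypothetical helper, not minikube's implementation, that recomposes the same ExecStart string from those three values.)

	// kubeletflags.go - illustrative sketch; values taken from the logged unit above.
	package main

	import (
		"fmt"
		"strings"
	)

	type nodeConfig struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	func kubeletExecStart(c nodeConfig) string {
		flags := []string{
			fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", c.KubernetesVersion),
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + c.NodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + c.NodeIP,
		}
		return "ExecStart=" + strings.Join(flags, " ")
	}

	func main() {
		// Values as logged for this node.
		fmt.Println(kubeletExecStart(nodeConfig{
			KubernetesVersion: "v1.30.0",
			NodeName:          "multinode-061470",
			NodeIP:            "192.168.39.59",
		}))
	}

	(Running this prints the ExecStart line logged above verbatim; the rest of the kubelet configuration is read from /var/lib/kubelet/config.yaml via the --config flag.)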
	I0429 00:32:35.229997   54766 ssh_runner.go:195] Run: crio config
	I0429 00:32:35.276280   54766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 00:32:35.276308   54766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 00:32:35.276319   54766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 00:32:35.276326   54766 command_runner.go:130] > #
	I0429 00:32:35.276335   54766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 00:32:35.276345   54766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 00:32:35.276358   54766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 00:32:35.276385   54766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 00:32:35.276397   54766 command_runner.go:130] > # reload'.
	I0429 00:32:35.276407   54766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 00:32:35.276418   54766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 00:32:35.276431   54766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 00:32:35.276440   54766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 00:32:35.276452   54766 command_runner.go:130] > [crio]
	I0429 00:32:35.276463   54766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 00:32:35.276474   54766 command_runner.go:130] > # containers images, in this directory.
	I0429 00:32:35.276482   54766 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 00:32:35.276508   54766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 00:32:35.276522   54766 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 00:32:35.276534   54766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 00:32:35.276542   54766 command_runner.go:130] > # imagestore = ""
	I0429 00:32:35.276557   54766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 00:32:35.276571   54766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 00:32:35.276581   54766 command_runner.go:130] > storage_driver = "overlay"
	I0429 00:32:35.276591   54766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 00:32:35.276604   54766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 00:32:35.276614   54766 command_runner.go:130] > storage_option = [
	I0429 00:32:35.276622   54766 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 00:32:35.276630   54766 command_runner.go:130] > ]
	I0429 00:32:35.276642   54766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 00:32:35.276656   54766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 00:32:35.276667   54766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 00:32:35.276680   54766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 00:32:35.276693   54766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 00:32:35.276700   54766 command_runner.go:130] > # always happen on a node reboot
	I0429 00:32:35.276712   54766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 00:32:35.276729   54766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 00:32:35.276743   54766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 00:32:35.276755   54766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 00:32:35.276767   54766 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 00:32:35.276782   54766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 00:32:35.276799   54766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 00:32:35.276808   54766 command_runner.go:130] > # internal_wipe = true
	I0429 00:32:35.276822   54766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 00:32:35.276834   54766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 00:32:35.276845   54766 command_runner.go:130] > # internal_repair = false
	I0429 00:32:35.276857   54766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 00:32:35.276870   54766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 00:32:35.276883   54766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 00:32:35.276899   54766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 00:32:35.276913   54766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 00:32:35.276921   54766 command_runner.go:130] > [crio.api]
	I0429 00:32:35.276933   54766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 00:32:35.276945   54766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 00:32:35.276961   54766 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 00:32:35.276972   54766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 00:32:35.276986   54766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 00:32:35.276995   54766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 00:32:35.277005   54766 command_runner.go:130] > # stream_port = "0"
	I0429 00:32:35.277016   54766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 00:32:35.277027   54766 command_runner.go:130] > # stream_enable_tls = false
	I0429 00:32:35.277040   54766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 00:32:35.277050   54766 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 00:32:35.277064   54766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 00:32:35.277078   54766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 00:32:35.277087   54766 command_runner.go:130] > # minutes.
	I0429 00:32:35.277094   54766 command_runner.go:130] > # stream_tls_cert = ""
	I0429 00:32:35.277105   54766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 00:32:35.277118   54766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 00:32:35.277128   54766 command_runner.go:130] > # stream_tls_key = ""
	I0429 00:32:35.277139   54766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 00:32:35.277153   54766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 00:32:35.277172   54766 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 00:32:35.277183   54766 command_runner.go:130] > # stream_tls_ca = ""
	I0429 00:32:35.277199   54766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 00:32:35.277211   54766 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 00:32:35.277226   54766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 00:32:35.277240   54766 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 00:32:35.277253   54766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 00:32:35.277266   54766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 00:32:35.277275   54766 command_runner.go:130] > [crio.runtime]
	I0429 00:32:35.277286   54766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 00:32:35.277298   54766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 00:32:35.277308   54766 command_runner.go:130] > # "nofile=1024:2048"
	I0429 00:32:35.277319   54766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 00:32:35.277331   54766 command_runner.go:130] > # default_ulimits = [
	I0429 00:32:35.277336   54766 command_runner.go:130] > # ]
	I0429 00:32:35.277347   54766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 00:32:35.277357   54766 command_runner.go:130] > # no_pivot = false
	I0429 00:32:35.277368   54766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 00:32:35.277381   54766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 00:32:35.277393   54766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 00:32:35.277407   54766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 00:32:35.277419   54766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 00:32:35.277433   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 00:32:35.277443   54766 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 00:32:35.277450   54766 command_runner.go:130] > # Cgroup setting for conmon
	I0429 00:32:35.277465   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 00:32:35.277475   54766 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 00:32:35.277487   54766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 00:32:35.277499   54766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 00:32:35.277513   54766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 00:32:35.277522   54766 command_runner.go:130] > conmon_env = [
	I0429 00:32:35.277535   54766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 00:32:35.277543   54766 command_runner.go:130] > ]
	I0429 00:32:35.277552   54766 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 00:32:35.277564   54766 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 00:32:35.277577   54766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 00:32:35.277587   54766 command_runner.go:130] > # default_env = [
	I0429 00:32:35.277596   54766 command_runner.go:130] > # ]
	I0429 00:32:35.277606   54766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 00:32:35.277622   54766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0429 00:32:35.277631   54766 command_runner.go:130] > # selinux = false
	I0429 00:32:35.277642   54766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 00:32:35.277656   54766 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 00:32:35.277669   54766 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 00:32:35.277679   54766 command_runner.go:130] > # seccomp_profile = ""
	I0429 00:32:35.277690   54766 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 00:32:35.277703   54766 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 00:32:35.277716   54766 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 00:32:35.277726   54766 command_runner.go:130] > # which might increase security.
	I0429 00:32:35.277740   54766 command_runner.go:130] > # This option is currently deprecated,
	I0429 00:32:35.277749   54766 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 00:32:35.277760   54766 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 00:32:35.277775   54766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 00:32:35.277788   54766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 00:32:35.277801   54766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 00:32:35.277813   54766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 00:32:35.277824   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.277837   54766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 00:32:35.277850   54766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 00:32:35.277861   54766 command_runner.go:130] > # the cgroup blockio controller.
	I0429 00:32:35.277871   54766 command_runner.go:130] > # blockio_config_file = ""
	I0429 00:32:35.277884   54766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 00:32:35.277894   54766 command_runner.go:130] > # blockio parameters.
	I0429 00:32:35.277901   54766 command_runner.go:130] > # blockio_reload = false
	I0429 00:32:35.277915   54766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 00:32:35.277925   54766 command_runner.go:130] > # irqbalance daemon.
	I0429 00:32:35.277937   54766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 00:32:35.277950   54766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 00:32:35.277965   54766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 00:32:35.277979   54766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 00:32:35.277991   54766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 00:32:35.278003   54766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 00:32:35.278015   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.278034   54766 command_runner.go:130] > # rdt_config_file = ""
	I0429 00:32:35.278044   54766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 00:32:35.278055   54766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 00:32:35.278079   54766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 00:32:35.278089   54766 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 00:32:35.278102   54766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 00:32:35.278115   54766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 00:32:35.278125   54766 command_runner.go:130] > # will be added.
	I0429 00:32:35.278134   54766 command_runner.go:130] > # default_capabilities = [
	I0429 00:32:35.278142   54766 command_runner.go:130] > # 	"CHOWN",
	I0429 00:32:35.278148   54766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 00:32:35.278158   54766 command_runner.go:130] > # 	"FSETID",
	I0429 00:32:35.278166   54766 command_runner.go:130] > # 	"FOWNER",
	I0429 00:32:35.278175   54766 command_runner.go:130] > # 	"SETGID",
	I0429 00:32:35.278182   54766 command_runner.go:130] > # 	"SETUID",
	I0429 00:32:35.278191   54766 command_runner.go:130] > # 	"SETPCAP",
	I0429 00:32:35.278198   54766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 00:32:35.278207   54766 command_runner.go:130] > # 	"KILL",
	I0429 00:32:35.278214   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278235   54766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 00:32:35.278249   54766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 00:32:35.278259   54766 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 00:32:35.278271   54766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 00:32:35.278284   54766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 00:32:35.278293   54766 command_runner.go:130] > default_sysctls = [
	I0429 00:32:35.278306   54766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 00:32:35.278314   54766 command_runner.go:130] > ]
	I0429 00:32:35.278323   54766 command_runner.go:130] > # List of devices on the host that a
	I0429 00:32:35.278337   54766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 00:32:35.278347   54766 command_runner.go:130] > # allowed_devices = [
	I0429 00:32:35.278354   54766 command_runner.go:130] > # 	"/dev/fuse",
	I0429 00:32:35.278361   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278370   54766 command_runner.go:130] > # List of additional devices. specified as
	I0429 00:32:35.278386   54766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 00:32:35.278397   54766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 00:32:35.278407   54766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 00:32:35.278417   54766 command_runner.go:130] > # additional_devices = [
	I0429 00:32:35.278425   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278437   54766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 00:32:35.278444   54766 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 00:32:35.278451   54766 command_runner.go:130] > # 	"/etc/cdi",
	I0429 00:32:35.278460   54766 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 00:32:35.278466   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278480   54766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 00:32:35.278493   54766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 00:32:35.278502   54766 command_runner.go:130] > # Defaults to false.
	I0429 00:32:35.278513   54766 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 00:32:35.278525   54766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 00:32:35.278539   54766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 00:32:35.278548   54766 command_runner.go:130] > # hooks_dir = [
	I0429 00:32:35.278559   54766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 00:32:35.278568   54766 command_runner.go:130] > # ]
	I0429 00:32:35.278579   54766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 00:32:35.278592   54766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 00:32:35.278605   54766 command_runner.go:130] > # its default mounts from the following two files:
	I0429 00:32:35.278609   54766 command_runner.go:130] > #
	I0429 00:32:35.278619   54766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 00:32:35.278629   54766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 00:32:35.278641   54766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 00:32:35.278647   54766 command_runner.go:130] > #
	I0429 00:32:35.278659   54766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 00:32:35.278673   54766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 00:32:35.278686   54766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 00:32:35.278696   54766 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 00:32:35.278702   54766 command_runner.go:130] > #
	I0429 00:32:35.278710   54766 command_runner.go:130] > # default_mounts_file = ""
	I0429 00:32:35.278723   54766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 00:32:35.278740   54766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 00:32:35.278749   54766 command_runner.go:130] > pids_limit = 1024
	I0429 00:32:35.278757   54766 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0429 00:32:35.278765   54766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 00:32:35.278772   54766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 00:32:35.278781   54766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 00:32:35.278785   54766 command_runner.go:130] > # log_size_max = -1
	I0429 00:32:35.278791   54766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 00:32:35.278796   54766 command_runner.go:130] > # log_to_journald = false
	I0429 00:32:35.278802   54766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 00:32:35.278809   54766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 00:32:35.278814   54766 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 00:32:35.278821   54766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 00:32:35.278826   54766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 00:32:35.278832   54766 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 00:32:35.278837   54766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 00:32:35.278842   54766 command_runner.go:130] > # read_only = false
	I0429 00:32:35.278849   54766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 00:32:35.278862   54766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 00:32:35.278871   54766 command_runner.go:130] > # live configuration reload.
	I0429 00:32:35.278877   54766 command_runner.go:130] > # log_level = "info"
	I0429 00:32:35.278888   54766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 00:32:35.278899   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.278909   54766 command_runner.go:130] > # log_filter = ""
	I0429 00:32:35.278919   54766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 00:32:35.278932   54766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 00:32:35.278942   54766 command_runner.go:130] > # separated by comma.
	I0429 00:32:35.278955   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.278964   54766 command_runner.go:130] > # uid_mappings = ""
	I0429 00:32:35.278973   54766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 00:32:35.278982   54766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 00:32:35.278986   54766 command_runner.go:130] > # separated by comma.
	I0429 00:32:35.278993   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279000   54766 command_runner.go:130] > # gid_mappings = ""
	I0429 00:32:35.279006   54766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 00:32:35.279015   54766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 00:32:35.279026   54766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 00:32:35.279036   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279042   54766 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 00:32:35.279048   54766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 00:32:35.279056   54766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 00:32:35.279064   54766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 00:32:35.279072   54766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 00:32:35.279078   54766 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 00:32:35.279084   54766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 00:32:35.279092   54766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 00:32:35.279100   54766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 00:32:35.279104   54766 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 00:32:35.279111   54766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 00:32:35.279120   54766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 00:32:35.279127   54766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 00:32:35.279132   54766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 00:32:35.279138   54766 command_runner.go:130] > drop_infra_ctr = false
	I0429 00:32:35.279145   54766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 00:32:35.279153   54766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 00:32:35.279162   54766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 00:32:35.279169   54766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 00:32:35.279176   54766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 00:32:35.279184   54766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 00:32:35.279190   54766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 00:32:35.279197   54766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 00:32:35.279202   54766 command_runner.go:130] > # shared_cpuset = ""
	I0429 00:32:35.279209   54766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 00:32:35.279216   54766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 00:32:35.279220   54766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 00:32:35.279232   54766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 00:32:35.279239   54766 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 00:32:35.279244   54766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 00:32:35.279252   54766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 00:32:35.279257   54766 command_runner.go:130] > # enable_criu_support = false
	I0429 00:32:35.279264   54766 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 00:32:35.279272   54766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 00:32:35.279279   54766 command_runner.go:130] > # enable_pod_events = false
	I0429 00:32:35.279285   54766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 00:32:35.279293   54766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 00:32:35.279300   54766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 00:32:35.279304   54766 command_runner.go:130] > # default_runtime = "runc"
	I0429 00:32:35.279309   54766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 00:32:35.279318   54766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0429 00:32:35.279330   54766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 00:32:35.279337   54766 command_runner.go:130] > # creation as a file is not desired either.
	I0429 00:32:35.279345   54766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 00:32:35.279352   54766 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 00:32:35.279357   54766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 00:32:35.279362   54766 command_runner.go:130] > # ]
	I0429 00:32:35.279368   54766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 00:32:35.279377   54766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 00:32:35.279385   54766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 00:32:35.279392   54766 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 00:32:35.279400   54766 command_runner.go:130] > #
	I0429 00:32:35.279406   54766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 00:32:35.279411   54766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 00:32:35.279449   54766 command_runner.go:130] > # runtime_type = "oci"
	I0429 00:32:35.279456   54766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 00:32:35.279461   54766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 00:32:35.279467   54766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 00:32:35.279472   54766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 00:32:35.279478   54766 command_runner.go:130] > # monitor_env = []
	I0429 00:32:35.279483   54766 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 00:32:35.279489   54766 command_runner.go:130] > # allowed_annotations = []
	I0429 00:32:35.279494   54766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 00:32:35.279500   54766 command_runner.go:130] > # Where:
	I0429 00:32:35.279505   54766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 00:32:35.279513   54766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 00:32:35.279521   54766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 00:32:35.279529   54766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 00:32:35.279533   54766 command_runner.go:130] > #   in $PATH.
	I0429 00:32:35.279539   54766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 00:32:35.279546   54766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 00:32:35.279554   54766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 00:32:35.279560   54766 command_runner.go:130] > #   state.
	I0429 00:32:35.279566   54766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 00:32:35.279573   54766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0429 00:32:35.279582   54766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 00:32:35.279590   54766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 00:32:35.279598   54766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 00:32:35.279606   54766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 00:32:35.279612   54766 command_runner.go:130] > #   The currently recognized values are:
	I0429 00:32:35.279618   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 00:32:35.279626   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 00:32:35.279634   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 00:32:35.279640   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 00:32:35.279650   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 00:32:35.279658   54766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 00:32:35.279666   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 00:32:35.279676   54766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 00:32:35.279684   54766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 00:32:35.279692   54766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 00:32:35.279698   54766 command_runner.go:130] > #   deprecated option "conmon".
	I0429 00:32:35.279705   54766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 00:32:35.279713   54766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 00:32:35.279721   54766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 00:32:35.279727   54766 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 00:32:35.279736   54766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0429 00:32:35.279744   54766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 00:32:35.279752   54766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 00:32:35.279759   54766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 00:32:35.279762   54766 command_runner.go:130] > #
	I0429 00:32:35.279767   54766 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 00:32:35.279772   54766 command_runner.go:130] > #
	I0429 00:32:35.279778   54766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 00:32:35.279786   54766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 00:32:35.279791   54766 command_runner.go:130] > #
	I0429 00:32:35.279799   54766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 00:32:35.279808   54766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 00:32:35.279810   54766 command_runner.go:130] > #
	I0429 00:32:35.279819   54766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 00:32:35.279824   54766 command_runner.go:130] > # feature.
	I0429 00:32:35.279830   54766 command_runner.go:130] > #
	I0429 00:32:35.279836   54766 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 00:32:35.279844   54766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 00:32:35.279850   54766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 00:32:35.279858   54766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 00:32:35.279864   54766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 00:32:35.279869   54766 command_runner.go:130] > #
	I0429 00:32:35.279875   54766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 00:32:35.279883   54766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 00:32:35.279887   54766 command_runner.go:130] > #
	I0429 00:32:35.279893   54766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 00:32:35.279901   54766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 00:32:35.279906   54766 command_runner.go:130] > #
	I0429 00:32:35.279917   54766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 00:32:35.279925   54766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 00:32:35.279931   54766 command_runner.go:130] > # limitation.
	I0429 00:32:35.279935   54766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 00:32:35.279942   54766 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 00:32:35.279946   54766 command_runner.go:130] > runtime_type = "oci"
	I0429 00:32:35.279951   54766 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 00:32:35.279954   54766 command_runner.go:130] > runtime_config_path = ""
	I0429 00:32:35.279961   54766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 00:32:35.279965   54766 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 00:32:35.279969   54766 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 00:32:35.279973   54766 command_runner.go:130] > monitor_env = [
	I0429 00:32:35.279979   54766 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 00:32:35.279984   54766 command_runner.go:130] > ]
	I0429 00:32:35.279989   54766 command_runner.go:130] > privileged_without_host_devices = false
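For illustration, an additional handler could be registered following the commented format above. The sketch below is hypothetical (it assumes a crun binary installed at /usr/bin/crun) and also allowlists the seccomp notifier annotation discussed earlier, which per the comments needs at least crun 0.19:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]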
	I0429 00:32:35.279997   54766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 00:32:35.280004   54766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 00:32:35.280010   54766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 00:32:35.280019   54766 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0429 00:32:35.280028   54766 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 00:32:35.280036   54766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 00:32:35.280047   54766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 00:32:35.280057   54766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 00:32:35.280064   54766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 00:32:35.280071   54766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 00:32:35.280076   54766 command_runner.go:130] > # Example:
	I0429 00:32:35.280081   54766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 00:32:35.280088   54766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 00:32:35.280093   54766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 00:32:35.280100   54766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 00:32:35.280104   54766 command_runner.go:130] > # cpuset = 0
	I0429 00:32:35.280108   54766 command_runner.go:130] > # cpushares = "0-1"
	I0429 00:32:35.280112   54766 command_runner.go:130] > # Where:
	I0429 00:32:35.280119   54766 command_runner.go:130] > # The workload name is workload-type.
	I0429 00:32:35.280127   54766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 00:32:35.280134   54766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 00:32:35.280143   54766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 00:32:35.280154   54766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 00:32:35.280159   54766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
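For illustration, a hypothetical workload entry following the commented example above (the name "throttled" and the resource values are made up, and this sketch assumes cpushares takes a numeric share value while cpuset takes a core range string):

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512
	cpuset = "0-1"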
	I0429 00:32:35.280166   54766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 00:32:35.280173   54766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 00:32:35.280179   54766 command_runner.go:130] > # Default value is set to true
	I0429 00:32:35.280183   54766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 00:32:35.280191   54766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 00:32:35.280197   54766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 00:32:35.280202   54766 command_runner.go:130] > # Default value is set to 'false'
	I0429 00:32:35.280207   54766 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 00:32:35.280213   54766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 00:32:35.280219   54766 command_runner.go:130] > #
	I0429 00:32:35.280225   54766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 00:32:35.280235   54766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 00:32:35.280244   54766 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 00:32:35.280250   54766 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 00:32:35.280255   54766 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 00:32:35.280258   54766 command_runner.go:130] > [crio.image]
	I0429 00:32:35.280263   54766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 00:32:35.280267   54766 command_runner.go:130] > # default_transport = "docker://"
	I0429 00:32:35.280274   54766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 00:32:35.280280   54766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 00:32:35.280284   54766 command_runner.go:130] > # global_auth_file = ""
	I0429 00:32:35.280288   54766 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 00:32:35.280293   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.280297   54766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 00:32:35.280303   54766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 00:32:35.280308   54766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 00:32:35.280313   54766 command_runner.go:130] > # This option supports live configuration reload.
	I0429 00:32:35.280316   54766 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 00:32:35.280322   54766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 00:32:35.280327   54766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0429 00:32:35.280332   54766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0429 00:32:35.280338   54766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 00:32:35.280341   54766 command_runner.go:130] > # pause_command = "/pause"
	I0429 00:32:35.280350   54766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 00:32:35.280356   54766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 00:32:35.280361   54766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 00:32:35.280367   54766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 00:32:35.280372   54766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 00:32:35.280377   54766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 00:32:35.280381   54766 command_runner.go:130] > # pinned_images = [
	I0429 00:32:35.280384   54766 command_runner.go:130] > # ]
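For illustration, pinning the default pause image shown earlier in this config would look like the sketch below (not part of the captured configuration):

	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]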
	I0429 00:32:35.280389   54766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 00:32:35.280395   54766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 00:32:35.280400   54766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 00:32:35.280406   54766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 00:32:35.280410   54766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 00:32:35.280414   54766 command_runner.go:130] > # signature_policy = ""
	I0429 00:32:35.280421   54766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 00:32:35.280430   54766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 00:32:35.280437   54766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 00:32:35.280445   54766 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0429 00:32:35.280450   54766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 00:32:35.280456   54766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 00:32:35.280464   54766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 00:32:35.280472   54766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 00:32:35.280478   54766 command_runner.go:130] > # changing them here.
	I0429 00:32:35.280482   54766 command_runner.go:130] > # insecure_registries = [
	I0429 00:32:35.280485   54766 command_runner.go:130] > # ]
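For illustration, a sketch with a hypothetical local registry (the host name is made up; per the comment above, configuring registries via /etc/containers/registries.conf is preferred):

	insecure_registries = [
		"registry.local:5000",
	]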
	I0429 00:32:35.280494   54766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 00:32:35.280498   54766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 00:32:35.280504   54766 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 00:32:35.280510   54766 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 00:32:35.280516   54766 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 00:32:35.280524   54766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 00:32:35.280533   54766 command_runner.go:130] > # CNI plugins.
	I0429 00:32:35.280541   54766 command_runner.go:130] > [crio.network]
	I0429 00:32:35.280554   54766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 00:32:35.280565   54766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 00:32:35.280569   54766 command_runner.go:130] > # cni_default_network = ""
	I0429 00:32:35.280581   54766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 00:32:35.280588   54766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 00:32:35.280594   54766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 00:32:35.280600   54766 command_runner.go:130] > # plugin_dirs = [
	I0429 00:32:35.280603   54766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 00:32:35.280607   54766 command_runner.go:130] > # ]
	I0429 00:32:35.280614   54766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 00:32:35.280621   54766 command_runner.go:130] > [crio.metrics]
	I0429 00:32:35.280626   54766 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 00:32:35.280632   54766 command_runner.go:130] > enable_metrics = true
	I0429 00:32:35.280636   54766 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 00:32:35.280643   54766 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 00:32:35.280650   54766 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0429 00:32:35.280658   54766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 00:32:35.280666   54766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 00:32:35.280675   54766 command_runner.go:130] > # metrics_collectors = [
	I0429 00:32:35.280684   54766 command_runner.go:130] > # 	"operations",
	I0429 00:32:35.280695   54766 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 00:32:35.280707   54766 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 00:32:35.280716   54766 command_runner.go:130] > # 	"operations_errors",
	I0429 00:32:35.280723   54766 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 00:32:35.280733   54766 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 00:32:35.280740   54766 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 00:32:35.280751   54766 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 00:32:35.280757   54766 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 00:32:35.280764   54766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 00:32:35.280773   54766 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 00:32:35.280781   54766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 00:32:35.280794   54766 command_runner.go:130] > # 	"containers_oom_total",
	I0429 00:32:35.280804   54766 command_runner.go:130] > # 	"containers_oom",
	I0429 00:32:35.280811   54766 command_runner.go:130] > # 	"processes_defunct",
	I0429 00:32:35.280818   54766 command_runner.go:130] > # 	"operations_total",
	I0429 00:32:35.280825   54766 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 00:32:35.280831   54766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 00:32:35.280836   54766 command_runner.go:130] > # 	"operations_errors_total",
	I0429 00:32:35.280840   54766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 00:32:35.280852   54766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 00:32:35.280857   54766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 00:32:35.280861   54766 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 00:32:35.280868   54766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 00:32:35.280872   54766 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 00:32:35.280881   54766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 00:32:35.280888   54766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 00:32:35.280891   54766 command_runner.go:130] > # ]
	I0429 00:32:35.280898   54766 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 00:32:35.280903   54766 command_runner.go:130] > # metrics_port = 9090
	I0429 00:32:35.280910   54766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 00:32:35.280916   54766 command_runner.go:130] > # metrics_socket = ""
	I0429 00:32:35.280921   54766 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 00:32:35.280929   54766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 00:32:35.280937   54766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 00:32:35.280944   54766 command_runner.go:130] > # certificate on any modification event.
	I0429 00:32:35.280948   54766 command_runner.go:130] > # metrics_cert = ""
	I0429 00:32:35.280955   54766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 00:32:35.280959   54766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 00:32:35.280966   54766 command_runner.go:130] > # metrics_key = ""
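For illustration, a sketch that keeps metrics enabled but restricts collection to a few of the collectors listed above (the selection and port are illustrative):

	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]
	metrics_port = 9090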
	I0429 00:32:35.280972   54766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 00:32:35.280978   54766 command_runner.go:130] > [crio.tracing]
	I0429 00:32:35.280983   54766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 00:32:35.280989   54766 command_runner.go:130] > # enable_tracing = false
	I0429 00:32:35.280995   54766 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0429 00:32:35.281001   54766 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 00:32:35.281007   54766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 00:32:35.281014   54766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
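For illustration, a sketch that turns tracing on at the documented default endpoint and samples every span, using the 1000000 always-sample value mentioned above (illustrative, not part of the captured configuration):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000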
	I0429 00:32:35.281018   54766 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 00:32:35.281022   54766 command_runner.go:130] > [crio.nri]
	I0429 00:32:35.281027   54766 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 00:32:35.281030   54766 command_runner.go:130] > # enable_nri = false
	I0429 00:32:35.281034   54766 command_runner.go:130] > # NRI socket to listen on.
	I0429 00:32:35.281038   54766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 00:32:35.281044   54766 command_runner.go:130] > # NRI plugin directory to use.
	I0429 00:32:35.281049   54766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 00:32:35.281062   54766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 00:32:35.281070   54766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 00:32:35.281082   54766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 00:32:35.281091   54766 command_runner.go:130] > # nri_disable_connections = false
	I0429 00:32:35.281098   54766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 00:32:35.281107   54766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 00:32:35.281116   54766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 00:32:35.281122   54766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 00:32:35.281128   54766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 00:32:35.281135   54766 command_runner.go:130] > [crio.stats]
	I0429 00:32:35.281140   54766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 00:32:35.281147   54766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 00:32:35.281154   54766 command_runner.go:130] > # stats_collection_period = 0
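For illustration, a sketch switching stats from on-demand collection to a fixed period (the 10-second value is arbitrary):

	[crio.stats]
	stats_collection_period = 10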
	I0429 00:32:35.281186   54766 command_runner.go:130] ! time="2024-04-29 00:32:35.237188614Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 00:32:35.281200   54766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 00:32:35.281311   54766 cni.go:84] Creating CNI manager for ""
	I0429 00:32:35.281321   54766 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 00:32:35.281329   54766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:32:35.281348   54766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-061470 NodeName:multinode-061470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:32:35.281481   54766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-061470"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:32:35.281535   54766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:32:35.293041   54766 command_runner.go:130] > kubeadm
	I0429 00:32:35.293055   54766 command_runner.go:130] > kubectl
	I0429 00:32:35.293058   54766 command_runner.go:130] > kubelet
	I0429 00:32:35.293267   54766 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:32:35.293310   54766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:32:35.304088   54766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0429 00:32:35.325328   54766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:32:35.345389   54766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0429 00:32:35.365404   54766 ssh_runner.go:195] Run: grep 192.168.39.59	control-plane.minikube.internal$ /etc/hosts
	I0429 00:32:35.370056   54766 command_runner.go:130] > 192.168.39.59	control-plane.minikube.internal
	I0429 00:32:35.370149   54766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:32:35.521576   54766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:32:35.537744   54766 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470 for IP: 192.168.39.59
	I0429 00:32:35.537766   54766 certs.go:194] generating shared ca certs ...
	I0429 00:32:35.537787   54766 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:32:35.537959   54766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:32:35.538011   54766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:32:35.538043   54766 certs.go:256] generating profile certs ...
	I0429 00:32:35.538133   54766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/client.key
	I0429 00:32:35.538191   54766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key.e02763ff
	I0429 00:32:35.538233   54766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key
	I0429 00:32:35.538244   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 00:32:35.538259   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 00:32:35.538281   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 00:32:35.538294   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 00:32:35.538308   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 00:32:35.538322   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 00:32:35.538342   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 00:32:35.538360   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 00:32:35.538426   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:32:35.538459   54766 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:32:35.538469   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:32:35.538489   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:32:35.538511   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:32:35.538531   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:32:35.538567   54766 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:32:35.538596   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.538610   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem -> /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.538622   54766 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.539223   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:32:35.566932   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:32:35.593631   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:32:35.619833   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:32:35.646406   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 00:32:35.673295   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 00:32:35.699643   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:32:35.725752   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/multinode-061470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 00:32:35.755049   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:32:35.782746   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:32:35.808284   54766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:32:35.834917   54766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:32:35.853682   54766 ssh_runner.go:195] Run: openssl version
	I0429 00:32:35.860989   54766 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 00:32:35.861059   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:32:35.874367   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.879825   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.879983   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.880043   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:32:35.886504   54766 command_runner.go:130] > b5213941
	I0429 00:32:35.886784   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:32:35.898242   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:32:35.911836   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917785   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917819   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.917859   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:32:35.924408   54766 command_runner.go:130] > 51391683
	I0429 00:32:35.924976   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:32:35.936601   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:32:35.949641   54766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955102   54766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955142   54766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.955193   54766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:32:35.961553   54766 command_runner.go:130] > 3ec20f2e
	I0429 00:32:35.961756   54766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:32:35.973244   54766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:32:35.978657   54766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:32:35.978688   54766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 00:32:35.978699   54766 command_runner.go:130] > Device: 253,1	Inode: 1057302     Links: 1
	I0429 00:32:35.978708   54766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 00:32:35.978717   54766 command_runner.go:130] > Access: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978725   54766 command_runner.go:130] > Modify: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978732   54766 command_runner.go:130] > Change: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978743   54766 command_runner.go:130] >  Birth: 2024-04-29 00:25:43.098932222 +0000
	I0429 00:32:35.978860   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:32:35.985032   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.985264   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:32:35.991557   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.991620   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:32:35.997326   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:35.997642   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:32:36.003543   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.003609   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:32:36.009637   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.009696   54766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 00:32:36.015367   54766 command_runner.go:130] > Certificate will not expire
	I0429 00:32:36.015646   54766 kubeadm.go:391] StartCluster: {Name:multinode-061470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-061470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.153 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:32:36.015765   54766 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:32:36.015799   54766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:32:36.057490   54766 command_runner.go:130] > b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039
	I0429 00:32:36.057519   54766 command_runner.go:130] > 39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75
	I0429 00:32:36.057528   54766 command_runner.go:130] > b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059
	I0429 00:32:36.057538   54766 command_runner.go:130] > 54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6
	I0429 00:32:36.057547   54766 command_runner.go:130] > 97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f
	I0429 00:32:36.057556   54766 command_runner.go:130] > 7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29
	I0429 00:32:36.057564   54766 command_runner.go:130] > feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23
	I0429 00:32:36.057577   54766 command_runner.go:130] > 3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8
	I0429 00:32:36.057603   54766 cri.go:91] found id: "b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039"
	I0429 00:32:36.057614   54766 cri.go:91] found id: "39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75"
	I0429 00:32:36.057617   54766 cri.go:91] found id: "b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059"
	I0429 00:32:36.057620   54766 cri.go:91] found id: "54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6"
	I0429 00:32:36.057623   54766 cri.go:91] found id: "97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f"
	I0429 00:32:36.057626   54766 cri.go:91] found id: "7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29"
	I0429 00:32:36.057629   54766 cri.go:91] found id: "feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23"
	I0429 00:32:36.057632   54766 cri.go:91] found id: "3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8"
	I0429 00:32:36.057634   54766 cri.go:91] found id: ""
	I0429 00:32:36.057676   54766 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.082970026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350989082945622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82a638e5-bbfd-4fde-8dec-e23504433f62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.083487174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91b73388-7b3a-411a-93f1-90fd43f9aa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.083604195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91b73388-7b3a-411a-93f1-90fd43f9aa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.084074686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91b73388-7b3a-411a-93f1-90fd43f9aa34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.130613483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42db39bd-b8a1-4f6d-9b8c-650d34608dac name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.130716752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42db39bd-b8a1-4f6d-9b8c-650d34608dac name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.132415578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=312f7bc3-f9ce-46e6-970b-a9d5a8e1316c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.132943635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350989132808798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=312f7bc3-f9ce-46e6-970b-a9d5a8e1316c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.133609164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=108aa2e6-62dd-4fc1-ade9-8e2738e71143 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.133692910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=108aa2e6-62dd-4fc1-ade9-8e2738e71143 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.134216460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=108aa2e6-62dd-4fc1-ade9-8e2738e71143 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.181728485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25aa190f-bb4b-43c0-a6ee-8d41a35878c5 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.181877815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25aa190f-bb4b-43c0-a6ee-8d41a35878c5 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.182968822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54773aca-dd92-4e68-9e4e-8cb234545bfa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.183386376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350989183366142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54773aca-dd92-4e68-9e4e-8cb234545bfa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.184185516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df1f0e46-5e25-4d71-b1db-e4bc487af17d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.184272426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df1f0e46-5e25-4d71-b1db-e4bc487af17d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.184631003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df1f0e46-5e25-4d71-b1db-e4bc487af17d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.230456268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4694072f-daa2-4226-a702-a4c8fa2c4ad7 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.230560384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4694072f-daa2-4226-a702-a4c8fa2c4ad7 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.232049306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ca05be7-b09c-45db-aae8-d6dd94daf887 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.232632586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714350989232603938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ca05be7-b09c-45db-aae8-d6dd94daf887 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.233160528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55c6e968-a391-4b55-9738-5f462d19f0eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.233252400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55c6e968-a391-4b55-9738-5f462d19f0eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:36:29 multinode-061470 crio[2855]: time="2024-04-29 00:36:29.235225073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b5ac9b0cc33883a50fd9674275191db8f70709b986c0ba81c0e362aad173df,PodSandboxId:11a98ae46b871c2424e18240709ec880ae353bb84bf00d93c5e401ce373aaeaa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714350796826593379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad,PodSandboxId:7321ac60318dc36e1bad6ba03af91c9b937865c02ac5c28768c3e04449fd28ff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714350763219241740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4,PodSandboxId:5eb7a0edb505d46781f1b243bfb44ca4c5da556cfa70be541745641eab2f8ffa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714350763210200552,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:434862f131515b0ec8bd3f1e8282e616a055da4a1f656365faaf7995d4859312,PodSandboxId:07fd2d5fe19c65029756045b93a79459240e6db1141be65fae10db39fa8c17ed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714350763061060391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},An
notations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8,PodSandboxId:20df3ee242e021fe3c6ddb2912ca44f9aaea43551447852a53d36bbac4602211,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714350762981650451,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.ku
bernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1,PodSandboxId:9ab5d11f11726f505e5192e6254501e80263cd25a8050ca9f861a8f32d02327e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714350758297992228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e,PodSandboxId:0a141e8c6c052b63c751a4d91278b043a7b596d03103cc7fe38669e9729acdaa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714350758271619402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.container.hash: 684e9b0f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f,PodSandboxId:01138626639dbc7e87846f2ae5d9bd5116f42d688bab7d48a331a7e23aa90d0a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714350758243235511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.container.hash: e5a050a9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254,PodSandboxId:49f8e4233acee3d2381c28543d15d29ecb15fce13b618d8f6aae3c1f5cc03895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714350758180390934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff2d8cb543c19f935d9c27d3aac5a442c16d6865a8f1c527d92e67889886f00,PodSandboxId:dfca3a66574d8167fadd34f1abac5706488495a4a37b8a0a15e5bd58cf9f55d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714350449425275009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hbcvz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02c11dff-48e7-4ee6-b95a-ff6d46ecd635,},Annotations:map[string]string{io.kubernetes.container.hash: 322a7f41,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039,PodSandboxId:12ec373cd24498abc6408815fe4fa91c2c8b045a1e3017c3abddf6dbdef634b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714350398670874161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r4bhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db303d0-3a93-40b8-a390-a902ebcaa71b,},Annotations:map[string]string{io.kubernetes.container.hash: b3318501,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b8488302397fe662c5db192b031eace2b49ff60ca7ce80dfa4d236ea4eeb75,PodSandboxId:60584a1c18ea626b3110754275f3babc64792abd09c2a39f33b1be3fa0509c64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714350398632218486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 313d1824-ed50-4033-8c64-33d4dc4b23a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2fb6767e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059,PodSandboxId:b8932dfb71a1719c922eee271a6a39aa39fdcd77238e38b5d725e5b8c312cc2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714350367555264264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xgkq,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 2e05361a-9929-4b79-988b-c81f3e3063bf,},Annotations:map[string]string{io.kubernetes.container.hash: c023c6e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6,PodSandboxId:2bb00020d94fba790b5d58be7024d99ce0931b5d287ae21cb43dcc34bc001240,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714350366888723697,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zqmjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ab0204-4bf4-4426-9b38
-b80b01ddccec,},Annotations:map[string]string{io.kubernetes.container.hash: 89d37956,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f,PodSandboxId:ceb5f4560679f615358bb4af1b2c11decfbcb9e187bf987ba508233712ff8918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714350347046108852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230c41ef054080f873d179de1c70599b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29,PodSandboxId:ddd9273f314062659291627862a001f384eebf1fc1f4056d6efd5643e46bb5b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714350347035526788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98ed760e9f019c872bb651a9ef7e3cb,},Annotations:map[string]string{io.kubernetes.
container.hash: 684e9b0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23,PodSandboxId:693b9ed36dcc45cae5322b8d498485c2083c0c8e56992b61b3ff71b120c02bd5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714350347000354463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156ce3292a2dee2299ae201d085387c1,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8,PodSandboxId:fe519d3311f341b7c2a63faac9551fd45c17f3bf02147c5ef4417da6706cfe19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714350346931161196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-061470,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96efb3366896582cd0e39b44db5fb706,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: e5a050a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55c6e968-a391-4b55-9738-5f462d19f0eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11b5ac9b0cc33       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   11a98ae46b871       busybox-fc5497c4f-hbcvz
	6e26fec3a142e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   7321ac60318dc       kindnet-zqmjk
	a64ecb02161b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   5eb7a0edb505d       coredns-7db6d8ff4d-r4bhp
	434862f131515       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   07fd2d5fe19c6       storage-provisioner
	9ea0e03bd31c3       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   20df3ee242e02       kube-proxy-4xgkq
	42726d45ab665       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   9ab5d11f11726       kube-scheduler-multinode-061470
	b818219749ed4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   0a141e8c6c052       etcd-multinode-061470
	4e54ce77f7f9a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   01138626639db       kube-apiserver-multinode-061470
	f086e122efd0a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   49f8e4233acee       kube-controller-manager-multinode-061470
	dff2d8cb543c1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   dfca3a66574d8       busybox-fc5497c4f-hbcvz
	b96cb4f67c31d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   12ec373cd2449       coredns-7db6d8ff4d-r4bhp
	39b8488302397       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   60584a1c18ea6       storage-provisioner
	b61b00d21f43e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      10 minutes ago      Exited              kube-proxy                0                   b8932dfb71a17       kube-proxy-4xgkq
	54136ed2ec098       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   2bb00020d94fb       kindnet-zqmjk
	97d87b80717b4       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   ceb5f4560679f       kube-scheduler-multinode-061470
	7d498bb9fe676       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   ddd9273f31406       etcd-multinode-061470
	feb59e1dcd4cb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   693b9ed36dcc4       kube-controller-manager-multinode-061470
	3831c13bc6184       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   fe519d3311f34       kube-apiserver-multinode-061470
	
	
	==> coredns [a64ecb02161b1f0eae6b006a2ff262795087379662bbfe4f533b0c23db813ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45756 - 18737 "HINFO IN 434757041915740770.52446483414376084. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.013748968s
	
	
	==> coredns [b96cb4f67c31d819e2e8f85818248afef721068b250f3a16d32bebcca4760039] <==
	[INFO] 10.244.1.2:39928 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179859s
	[INFO] 10.244.1.2:48329 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177624s
	[INFO] 10.244.1.2:60954 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114302s
	[INFO] 10.244.1.2:39143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001996062s
	[INFO] 10.244.1.2:57685 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248408s
	[INFO] 10.244.1.2:55056 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000298412s
	[INFO] 10.244.1.2:57477 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000258113s
	[INFO] 10.244.0.3:44859 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078253s
	[INFO] 10.244.0.3:58305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00005714s
	[INFO] 10.244.0.3:58583 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043115s
	[INFO] 10.244.0.3:35160 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093128s
	[INFO] 10.244.1.2:46440 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000296076s
	[INFO] 10.244.1.2:53786 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008941s
	[INFO] 10.244.1.2:55749 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075948s
	[INFO] 10.244.1.2:38358 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068043s
	[INFO] 10.244.0.3:46826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116218s
	[INFO] 10.244.0.3:48256 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105442s
	[INFO] 10.244.0.3:50215 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094963s
	[INFO] 10.244.0.3:36144 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008138s
	[INFO] 10.244.1.2:50679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126126s
	[INFO] 10.244.1.2:56121 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131799s
	[INFO] 10.244.1.2:34995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091915s
	[INFO] 10.244.1.2:36269 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123998s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-061470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-061470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T00_25_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:25:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:36:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:25:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:32:42 +0000   Mon, 29 Apr 2024 00:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    multinode-061470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ce16e3900734f698b7f07cee0a80904
	  System UUID:                3ce16e39-0073-4f69-8b7f-07cee0a80904
	  Boot ID:                    e490d2bd-22eb-4348-b16c-88ecf79bfed6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hbcvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 coredns-7db6d8ff4d-r4bhp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-061470                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-zqmjk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-061470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-061470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4xgkq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-061470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-061470 event: Registered Node multinode-061470 in Controller
	  Normal  NodeReady                9m51s                  kubelet          Node multinode-061470 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-061470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-061470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-061470 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-061470 event: Registered Node multinode-061470 in Controller
	
	
	Name:               multinode-061470-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-061470-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=multinode-061470
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T00_33_22_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:33:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-061470-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:34:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:34:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:34:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:34:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 00:33:52 +0000   Mon, 29 Apr 2024 00:34:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.153
	  Hostname:    multinode-061470-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27acee0a5e2241b4bd50a401d18843cf
	  System UUID:                27acee0a-5e22-41b4-bd50-a401d18843cf
	  Boot ID:                    350f2cfb-b4bb-4845-890b-f2a283ffbd2b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vxgzh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-gnscp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m17s
	  kube-system                 kube-proxy-xzttx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m18s (x2 over 9m18s)  kubelet          Node multinode-061470-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s (x2 over 9m18s)  kubelet          Node multinode-061470-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m18s (x2 over 9m18s)  kubelet          Node multinode-061470-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m7s                   kubelet          Node multinode-061470-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)    kubelet          Node multinode-061470-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)    kubelet          Node multinode-061470-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)    kubelet          Node multinode-061470-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m58s                  kubelet          Node multinode-061470-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-061470-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.068144] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.177533] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.166514] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299233] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.853304] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.062834] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.192418] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.840254] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.715682] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.076554] kauditd_printk_skb: 41 callbacks suppressed
	[Apr29 00:26] systemd-fstab-generator[1470]: Ignoring "noauto" option for root device
	[  +0.137853] kauditd_printk_skb: 21 callbacks suppressed
	[ +33.128082] kauditd_printk_skb: 60 callbacks suppressed
	[Apr29 00:27] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 00:32] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.152624] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.182475] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.147543] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.298584] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.788816] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +1.822195] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +5.642751] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.898555] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.841080] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[Apr29 00:33] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7d498bb9fe6766a3b40b7dfdba0c66f2135181e10b6e275a37923d235740fa29] <==
	{"level":"info","ts":"2024-04-29T00:25:47.707164Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:25:47.707202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:25:47.712911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:25:47.713077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:25:47.717248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:25:47.779674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.59:2379"}
	{"level":"warn","ts":"2024-04-29T00:27:12.097624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.940301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14211266022879824445 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:45388f273e548a3c>","response":"size:42"}
	{"level":"info","ts":"2024-04-29T00:27:12.097981Z","caller":"traceutil/trace.go:171","msg":"trace[1998598625] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"184.254409ms","start":"2024-04-29T00:27:11.913694Z","end":"2024-04-29T00:27:12.097949Z","steps":["trace[1998598625] 'process raft request'  (duration: 184.186259ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:27:12.098289Z","caller":"traceutil/trace.go:171","msg":"trace[1222526137] linearizableReadLoop","detail":"{readStateIndex:518; appliedIndex:517; }","duration":"244.509979ms","start":"2024-04-29T00:27:11.853769Z","end":"2024-04-29T00:27:12.098279Z","steps":["trace[1222526137] 'read index received'  (duration: 63.563665ms)","trace[1222526137] 'applied index is now lower than readState.Index'  (duration: 180.945365ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T00:27:12.098412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.623491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-061470-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T00:27:12.098459Z","caller":"traceutil/trace.go:171","msg":"trace[268372184] range","detail":"{range_begin:/registry/minions/multinode-061470-m02; range_end:; response_count:1; response_revision:493; }","duration":"244.705932ms","start":"2024-04-29T00:27:11.853747Z","end":"2024-04-29T00:27:12.098452Z","steps":["trace[268372184] 'agreement among raft nodes before linearized reading'  (duration: 244.59642ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:28:04.797642Z","caller":"traceutil/trace.go:171","msg":"trace[1370174191] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"191.342121ms","start":"2024-04-29T00:28:04.606283Z","end":"2024-04-29T00:28:04.797625Z","steps":["trace[1370174191] 'process raft request'  (duration: 142.758082ms)","trace[1370174191] 'compare'  (duration: 48.108418ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T00:28:04.797939Z","caller":"traceutil/trace.go:171","msg":"trace[907953116] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"174.800225ms","start":"2024-04-29T00:28:04.623122Z","end":"2024-04-29T00:28:04.797922Z","steps":["trace[907953116] 'process raft request'  (duration: 174.200967ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T00:28:05.633344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.998326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-061470-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T00:28:05.633427Z","caller":"traceutil/trace.go:171","msg":"trace[7548960] range","detail":"{range_begin:/registry/csinodes/multinode-061470-m03; range_end:; response_count:0; response_revision:656; }","duration":"124.123552ms","start":"2024-04-29T00:28:05.509287Z","end":"2024-04-29T00:28:05.633411Z","steps":["trace[7548960] 'range keys from in-memory index tree'  (duration: 123.919529ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T00:31:02.594355Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T00:31:02.594528Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-061470","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"]}
	{"level":"warn","ts":"2024-04-29T00:31:02.595014Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.595052Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.59:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.600543Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:31:02.600676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:31:02.681346Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8376b9efef0ac538","current-leader-member-id":"8376b9efef0ac538"}
	{"level":"info","ts":"2024-04-29T00:31:02.684385Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:31:02.684549Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:31:02.684562Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-061470","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"]}
	
	
	==> etcd [b818219749ed474e9eb70ef151dab9f30534328bef9c0392cdb66452ff83a74e] <==
	{"level":"info","ts":"2024-04-29T00:32:39.022461Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:32:39.022481Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:32:39.022734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 switched to configuration voters=(9472963306379199800)"}
	{"level":"info","ts":"2024-04-29T00:32:39.02288Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","added-peer-id":"8376b9efef0ac538","added-peer-peer-urls":["https://192.168.39.59:2380"]}
	{"level":"info","ts":"2024-04-29T00:32:39.023031Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec2082d3763590b8","local-member-id":"8376b9efef0ac538","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:32:39.023082Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:32:39.030335Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T00:32:39.030575Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8376b9efef0ac538","initial-advertise-peer-urls":["https://192.168.39.59:2380"],"listen-peer-urls":["https://192.168.39.59:2380"],"advertise-client-urls":["https://192.168.39.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T00:32:39.032952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:32:39.033237Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:32:39.033273Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.59:2380"}
	{"level":"info","ts":"2024-04-29T00:32:40.861617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgPreVoteResp from 8376b9efef0ac538 at term 2"}
	{"level":"info","ts":"2024-04-29T00:32:40.861721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 received MsgVoteResp from 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8376b9efef0ac538 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.861747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8376b9efef0ac538 elected leader 8376b9efef0ac538 at term 3"}
	{"level":"info","ts":"2024-04-29T00:32:40.868149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:32:40.868097Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8376b9efef0ac538","local-member-attributes":"{Name:multinode-061470 ClientURLs:[https://192.168.39.59:2379]}","request-path":"/0/members/8376b9efef0ac538/attributes","cluster-id":"ec2082d3763590b8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:32:40.869113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:32:40.869367Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:32:40.869382Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:32:40.87147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.59:2379"}
	{"level":"info","ts":"2024-04-29T00:32:40.872325Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:36:29 up 11 min,  0 users,  load average: 0.24, 0.27, 0.17
	Linux multinode-061470 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [54136ed2ec0982f9eaa6f5d4763dccb106abd1085ea34d307f4f7d473615e8d6] <==
	I0429 00:30:17.956358       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:27.962131       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:27.962182       1 main.go:227] handling current node
	I0429 00:30:27.962194       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:27.962200       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:27.962335       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:27.962374       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:37.978753       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:37.979026       1 main.go:227] handling current node
	I0429 00:30:37.979137       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:37.979151       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:37.979497       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:37.979616       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:47.992965       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:47.993010       1 main.go:227] handling current node
	I0429 00:30:47.993021       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:47.993028       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:47.993137       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:47.993167       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	I0429 00:30:58.002051       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:30:58.002149       1 main.go:227] handling current node
	I0429 00:30:58.002177       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:30:58.002196       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:30:58.002359       1 main.go:223] Handling node with IPs: map[192.168.39.138:{}]
	I0429 00:30:58.002408       1 main.go:250] Node multinode-061470-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6e26fec3a142e13faafa8ad14b597baa4f169d898e442d71ea6593487e179dad] <==
	I0429 00:35:24.304232       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:35:34.319210       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:35:34.319480       1 main.go:227] handling current node
	I0429 00:35:34.319607       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:35:34.319751       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:35:44.327332       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:35:44.327381       1 main.go:227] handling current node
	I0429 00:35:44.327399       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:35:44.327410       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:35:54.331945       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:35:54.331992       1 main.go:227] handling current node
	I0429 00:35:54.332003       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:35:54.332009       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:36:04.344717       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:36:04.345051       1 main.go:227] handling current node
	I0429 00:36:04.345117       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:36:04.345143       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:36:14.361641       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:36:14.361992       1 main.go:227] handling current node
	I0429 00:36:14.362049       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:36:14.362073       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	I0429 00:36:24.374644       1 main.go:223] Handling node with IPs: map[192.168.39.59:{}]
	I0429 00:36:24.374694       1 main.go:227] handling current node
	I0429 00:36:24.374704       1 main.go:223] Handling node with IPs: map[192.168.39.153:{}]
	I0429 00:36:24.374709       1 main.go:250] Node multinode-061470-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3831c13bc6184258db186f5acdb441da8d018824ee0fca4d83979b9b00e19db8] <==
	E0429 00:31:02.611132       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 00:31:02.611347       1 controller.go:84] Shutting down OpenAPI AggregationController
	E0429 00:31:02.612348       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613162       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613220       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 00:31:02.613240       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 00:31:02.613687       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0429 00:31:02.613808       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0429 00:31:02.614057       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:31:02.614112       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0429 00:31:02.614136       1 controller.go:167] Shutting down OpenAPI controller
	I0429 00:31:02.614173       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0429 00:31:02.614195       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0429 00:31:02.614238       1 establishing_controller.go:87] Shutting down EstablishingController
	I0429 00:31:02.614259       1 naming_controller.go:302] Shutting down NamingConditionController
	I0429 00:31:02.614308       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0429 00:31:02.614321       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0429 00:31:02.614332       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0429 00:31:02.614362       1 available_controller.go:439] Shutting down AvailableConditionController
	I0429 00:31:02.614374       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0429 00:31:02.614385       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0429 00:31:02.614403       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0429 00:31:02.614445       1 controller.go:129] Ending legacy_token_tracking_controller
	I0429 00:31:02.614451       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0429 00:31:02.614467       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	
	
	==> kube-apiserver [4e54ce77f7f9a167e01e6facda017ced58943135aefc433b77571c564a98ce4f] <==
	I0429 00:32:42.242037       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 00:32:42.341898       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:32:42.342010       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:32:42.342036       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:32:42.342058       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:32:42.342080       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:32:42.383495       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:32:42.383566       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:32:42.383662       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:32:42.384256       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:32:42.384429       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:32:42.384490       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:32:42.385984       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:32:42.388812       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:32:42.389397       1 policy_source.go:224] refreshing policies
	I0429 00:32:42.393051       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 00:32:42.398517       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:32:43.220641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:32:44.612416       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:32:44.751580       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 00:32:44.776658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:32:44.846204       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:32:44.859296       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:32:55.753740       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:32:55.854015       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f086e122efd0a79db5726a580c8eb8fa99eae5ef2c1d677fb3aa2b679bfb2254] <==
	I0429 00:33:22.379583       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m02" podCIDRs=["10.244.1.0/24"]
	I0429 00:33:24.239904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.667µs"
	I0429 00:33:24.281093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.956µs"
	I0429 00:33:24.290783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.642µs"
	I0429 00:33:24.325949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.337µs"
	I0429 00:33:24.330179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.603µs"
	I0429 00:33:24.333161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.001µs"
	I0429 00:33:25.362502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.47µs"
	I0429 00:33:31.654711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:31.683734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.846µs"
	I0429 00:33:31.705115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.434µs"
	I0429 00:33:34.748728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.043475ms"
	I0429 00:33:34.748891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.24µs"
	I0429 00:33:51.426192       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:52.693138       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:33:52.694044       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:33:52.707975       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.2.0/24"]
	I0429 00:34:02.090779       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:34:07.937911       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:34:45.719713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.533386ms"
	I0429 00:34:45.723395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.861µs"
	I0429 00:35:15.572924       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8zgdq"
	I0429 00:35:15.600144       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8zgdq"
	I0429 00:35:15.600192       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-cjx8c"
	I0429 00:35:15.623912       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-cjx8c"
	
	
	==> kube-controller-manager [feb59e1dcd4cb4503f2c1a202c23182f07f9fe1b8deb85a182679de1241e3c23] <==
	I0429 00:27:12.100672       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m02\" does not exist"
	I0429 00:27:12.113236       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m02" podCIDRs=["10.244.1.0/24"]
	I0429 00:27:15.256280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-061470-m02"
	I0429 00:27:22.316721       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:27:24.832159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.960936ms"
	I0429 00:27:24.851904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.586454ms"
	I0429 00:27:24.854166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.85µs"
	I0429 00:27:24.855878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.437µs"
	I0429 00:27:28.927035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.822473ms"
	I0429 00:27:28.927285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.113µs"
	I0429 00:27:29.745165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.991057ms"
	I0429 00:27:29.745407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.182µs"
	I0429 00:28:04.800312       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:28:04.800632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:04.817108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.2.0/24"]
	I0429 00:28:05.273806       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-061470-m03"
	I0429 00:28:14.562249       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:45.677647       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:46.692491       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-061470-m03\" does not exist"
	I0429 00:28:46.692618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:28:46.707556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-061470-m03" podCIDRs=["10.244.3.0/24"]
	I0429 00:28:55.851414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m02"
	I0429 00:29:35.328662       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-061470-m03"
	I0429 00:29:35.396059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.317012ms"
	I0429 00:29:35.396256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.501µs"
	
	
	==> kube-proxy [9ea0e03bd31c39cdc62d68ffb54d9b093e1972a03f177dd3044c678276c372b8] <==
	I0429 00:32:43.307656       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:32:43.347354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.59"]
	I0429 00:32:43.472031       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:32:43.472085       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:32:43.472102       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:32:43.479260       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:32:43.480102       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:32:43.480147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:32:43.481046       1 config.go:192] "Starting service config controller"
	I0429 00:32:43.481103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:32:43.481139       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:32:43.481170       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:32:43.481714       1 config.go:319] "Starting node config controller"
	I0429 00:32:43.481748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:32:43.581875       1 shared_informer.go:320] Caches are synced for node config
	I0429 00:32:43.581989       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:32:43.581998       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b61b00d21f43eb1c5d8a33dc36b359f8e57957ca0a798bd041fd327e2ccfa059] <==
	I0429 00:26:07.691957       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:26:07.711121       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.59"]
	I0429 00:26:07.775449       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:26:07.775506       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:26:07.775525       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:26:07.778656       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:26:07.778914       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:26:07.778953       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:26:07.780528       1 config.go:192] "Starting service config controller"
	I0429 00:26:07.780571       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:26:07.780590       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:26:07.780594       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:26:07.782758       1 config.go:319] "Starting node config controller"
	I0429 00:26:07.782798       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:26:07.881373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:26:07.881462       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:26:07.882942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [42726d45ab665060d58b0ec625d7e61b8c4c797c9f574d7014ff557cd3b869b1] <==
	I0429 00:32:39.858657       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:32:42.257468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 00:32:42.257518       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:32:42.257529       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:32:42.257537       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:32:42.310510       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:32:42.310651       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:32:42.314986       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:32:42.315031       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 00:32:42.315630       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 00:32:42.315733       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:32:42.415540       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [97d87b80717b470f27a8789e50942383fe9175badd1d96bb63480fb3ecb3e50f] <==
	E0429 00:25:49.572714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:25:49.572788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.572880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:25:49.572919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:25:49.572891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.572989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:25:49.573804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:49.573101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:25:49.574568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:25:50.586460       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:25:50.586526       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:25:50.713787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 00:25:50.713916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 00:25:50.723597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 00:25:50.723780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 00:25:50.772022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:25:50.772162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:25:50.780164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:25:50.780287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:25:50.807125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:25:50.807212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:25:50.808086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:25:50.808153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 00:25:53.755634       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 00:31:02.586814       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466463    3071 topology_manager.go:215] "Topology Admit Handler" podUID="e8ab0204-4bf4-4426-9b38-b80b01ddccec" podNamespace="kube-system" podName="kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466875    3071 topology_manager.go:215] "Topology Admit Handler" podUID="2e05361a-9929-4b79-988b-c81f3e3063bf" podNamespace="kube-system" podName="kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.466998    3071 topology_manager.go:215] "Topology Admit Handler" podUID="02c11dff-48e7-4ee6-b95a-ff6d46ecd635" podNamespace="default" podName="busybox-fc5497c4f-hbcvz"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.485216    3071 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490660    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e05361a-9929-4b79-988b-c81f3e3063bf-xtables-lock\") pod \"kube-proxy-4xgkq\" (UID: \"2e05361a-9929-4b79-988b-c81f3e3063bf\") " pod="kube-system/kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490723    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/313d1824-ed50-4033-8c64-33d4dc4b23a5-tmp\") pod \"storage-provisioner\" (UID: \"313d1824-ed50-4033-8c64-33d4dc4b23a5\") " pod="kube-system/storage-provisioner"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490761    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-lib-modules\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490784    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e05361a-9929-4b79-988b-c81f3e3063bf-lib-modules\") pod \"kube-proxy-4xgkq\" (UID: \"2e05361a-9929-4b79-988b-c81f3e3063bf\") " pod="kube-system/kube-proxy-4xgkq"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490865    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-cni-cfg\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:32:42 multinode-061470 kubelet[3071]: I0429 00:32:42.490915    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8ab0204-4bf4-4426-9b38-b80b01ddccec-xtables-lock\") pod \"kindnet-zqmjk\" (UID: \"e8ab0204-4bf4-4426-9b38-b80b01ddccec\") " pod="kube-system/kindnet-zqmjk"
	Apr 29 00:33:37 multinode-061470 kubelet[3071]: E0429 00:33:37.558337    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:33:37 multinode-061470 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:34:37 multinode-061470 kubelet[3071]: E0429 00:34:37.559080    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:34:37 multinode-061470 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:34:37 multinode-061470 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:34:37 multinode-061470 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:34:37 multinode-061470 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 00:35:37 multinode-061470 kubelet[3071]: E0429 00:35:37.558561    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 00:35:37 multinode-061470 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 00:35:37 multinode-061470 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 00:35:37 multinode-061470 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 00:35:37 multinode-061470 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:36:28.792426   56576 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-061470 -n multinode-061470
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-061470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.46s)
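The stderr block above also shows the post-mortem log collection itself tripping over "bufio.Scanner: token too long" while reading lastStart.txt, so the last-start log could not be attached. That error comes from Go's bufio.Scanner, which rejects any single line longer than its buffer limit (64 KiB by default). The following is a minimal, generic Go sketch of scanning a file with an enlarged per-line limit; it is illustrative only (the path and sizes are placeholders), not the minikube implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // placeholder path, illustrative only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	// The default per-line limit is bufio.MaxScanTokenSize (64 KiB); raise it
	// so one very long line does not fail with "token too long".
	s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for s.Scan() {
		_ = s.Text() // process each (possibly very long) line
	}
	if err := s.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err) // still reports an error if the new limit is exceeded
	}
}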

                                                
                                    
x
+
TestPreload (277.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-199892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0429 00:40:48.629286   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-199892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.757795267s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-199892 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-199892 image pull gcr.io/k8s-minikube/busybox: (3.036221807s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-199892
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-199892: exit status 82 (2m0.478848291s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-199892"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-199892 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-29 00:44:45.136415344 +0000 UTC m=+5855.729276210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-199892 -n test-preload-199892
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-199892 -n test-preload-199892: exit status 3 (18.61854223s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:45:03.750438   59424 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.157:22: connect: no route to host
	E0429 00:45:03.750462   59424 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.157:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-199892" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-199892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-199892
--- FAIL: TestPreload (277.79s)
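For context on exit status 82 above: the stop path gives up and reports GUEST_STOP_TIMEOUT when the VM is still reported "Running" after the stop wait expires. The sketch below is a generic, deadline-bounded stop-and-poll loop in Go that produces the same shape of error; the stop and state callbacks are hypothetical stand-ins, not minikube or libmachine APIs.

package main

import (
	"fmt"
	"time"
)

// waitForStop issues a stop request, then polls the reported state until it
// reads "Stopped" or the deadline passes. stop and state are hypothetical
// driver callbacks used only for illustration.
func waitForStop(stop func() error, state func() (string, error), timeout time.Duration) error {
	if err := stop(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if st, err := state(); err == nil && st == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	st, _ := state()
	return fmt.Errorf("unable to stop vm, current state %q", st)
}

func main() {
	// A simulated VM that never leaves "Running" exercises the timeout branch,
	// which is the failure mode seen in the stop output above.
	stop := func() error { return nil }
	state := func() (string, error) { return "Running", nil }
	if err := waitForStop(stop, state, 4*time.Second); err != nil {
		fmt.Println("stop failed:", err)
	}
}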

                                                
                                    
x
+
TestKubernetesUpgrade (453.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m29.130718245s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-219055" primary control-plane node in "kubernetes-upgrade-219055" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:47:47.616885   61110 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:47:47.617145   61110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:47:47.617155   61110 out.go:304] Setting ErrFile to fd 2...
	I0429 00:47:47.617159   61110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:47:47.617349   61110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:47:47.617876   61110 out.go:298] Setting JSON to false
	I0429 00:47:47.618771   61110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9012,"bootTime":1714342656,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:47:47.618821   61110 start.go:139] virtualization: kvm guest
	I0429 00:47:47.621009   61110 out.go:177] * [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:47:47.622345   61110 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:47:47.623545   61110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:47:47.622400   61110 notify.go:220] Checking for updates...
	I0429 00:47:47.626011   61110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:47:47.627274   61110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:47:47.628511   61110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:47:47.629637   61110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:47:47.631151   61110 config.go:182] Loaded profile config "NoKubernetes-069355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:47:47.631248   61110 config.go:182] Loaded profile config "offline-crio-047422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:47:47.631326   61110 config.go:182] Loaded profile config "running-upgrade-127682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0429 00:47:47.631413   61110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:47:47.664968   61110 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 00:47:47.666164   61110 start.go:297] selected driver: kvm2
	I0429 00:47:47.666175   61110 start.go:901] validating driver "kvm2" against <nil>
	I0429 00:47:47.666190   61110 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:47:47.666997   61110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:47:47.667067   61110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:47:47.681162   61110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:47:47.681199   61110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 00:47:47.681384   61110 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 00:47:47.681433   61110 cni.go:84] Creating CNI manager for ""
	I0429 00:47:47.681445   61110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:47:47.681451   61110 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 00:47:47.681499   61110 start.go:340] cluster config:
	{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:47:47.681581   61110 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:47:47.683252   61110 out.go:177] * Starting "kubernetes-upgrade-219055" primary control-plane node in "kubernetes-upgrade-219055" cluster
	I0429 00:47:47.684494   61110 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 00:47:47.684521   61110 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:47:47.684532   61110 cache.go:56] Caching tarball of preloaded images
	I0429 00:47:47.684602   61110 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:47:47.684612   61110 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 00:47:47.684696   61110 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/config.json ...
	I0429 00:47:47.684711   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/config.json: {Name:mk2c19d526c9a056c31ba6063d22103f2934b876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:47:47.684825   61110 start.go:360] acquireMachinesLock for kubernetes-upgrade-219055: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:48:43.442926   61110 start.go:364] duration metric: took 55.758072167s to acquireMachinesLock for "kubernetes-upgrade-219055"
	I0429 00:48:43.443018   61110 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 00:48:43.443156   61110 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 00:48:43.444843   61110 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 00:48:43.445050   61110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:48:43.445099   61110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:48:43.461358   61110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0429 00:48:43.461779   61110 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:48:43.462358   61110 main.go:141] libmachine: Using API Version  1
	I0429 00:48:43.462382   61110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:48:43.462764   61110 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:48:43.462989   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:48:43.463116   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:48:43.463279   61110 start.go:159] libmachine.API.Create for "kubernetes-upgrade-219055" (driver="kvm2")
	I0429 00:48:43.463310   61110 client.go:168] LocalClient.Create starting
	I0429 00:48:43.463339   61110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0429 00:48:43.463367   61110 main.go:141] libmachine: Decoding PEM data...
	I0429 00:48:43.463391   61110 main.go:141] libmachine: Parsing certificate...
	I0429 00:48:43.463443   61110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0429 00:48:43.463468   61110 main.go:141] libmachine: Decoding PEM data...
	I0429 00:48:43.463488   61110 main.go:141] libmachine: Parsing certificate...
	I0429 00:48:43.463518   61110 main.go:141] libmachine: Running pre-create checks...
	I0429 00:48:43.463533   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .PreCreateCheck
	I0429 00:48:43.463907   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetConfigRaw
	I0429 00:48:43.464334   61110 main.go:141] libmachine: Creating machine...
	I0429 00:48:43.464350   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .Create
	I0429 00:48:43.464466   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Creating KVM machine...
	I0429 00:48:43.465619   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found existing default KVM network
	I0429 00:48:43.466699   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.466545   61906 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:e4:98} reservation:<nil>}
	I0429 00:48:43.467779   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.467697   61906 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002626f0}
	I0429 00:48:43.467821   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | created network xml: 
	I0429 00:48:43.467840   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | <network>
	I0429 00:48:43.467863   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   <name>mk-kubernetes-upgrade-219055</name>
	I0429 00:48:43.467885   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   <dns enable='no'/>
	I0429 00:48:43.467897   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   
	I0429 00:48:43.467910   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0429 00:48:43.467926   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |     <dhcp>
	I0429 00:48:43.467941   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0429 00:48:43.467953   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |     </dhcp>
	I0429 00:48:43.467964   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   </ip>
	I0429 00:48:43.467975   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG |   
	I0429 00:48:43.467987   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | </network>
	I0429 00:48:43.468002   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | 
	I0429 00:48:43.472918   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | trying to create private KVM network mk-kubernetes-upgrade-219055 192.168.50.0/24...
	I0429 00:48:43.551064   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | private KVM network mk-kubernetes-upgrade-219055 192.168.50.0/24 created
	I0429 00:48:43.551109   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.551033   61906 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:48:43.551129   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055 ...
	I0429 00:48:43.551141   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 00:48:43.551158   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 00:48:43.782261   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.782118   61906 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa...
	I0429 00:48:43.889971   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.889827   61906 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/kubernetes-upgrade-219055.rawdisk...
	I0429 00:48:43.890002   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Writing magic tar header
	I0429 00:48:43.890037   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Writing SSH key tar header
	I0429 00:48:43.890063   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:43.889936   61906 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055 ...
	I0429 00:48:43.890075   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055
	I0429 00:48:43.890091   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055 (perms=drwx------)
	I0429 00:48:43.890098   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0429 00:48:43.890107   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:48:43.890124   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0429 00:48:43.890163   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0429 00:48:43.890202   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 00:48:43.890216   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0429 00:48:43.890232   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0429 00:48:43.890246   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home/jenkins
	I0429 00:48:43.890271   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Checking permissions on dir: /home
	I0429 00:48:43.890285   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Skipping /home - not owner
	I0429 00:48:43.890304   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 00:48:43.890318   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 00:48:43.890332   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Creating domain...
	I0429 00:48:43.891419   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) define libvirt domain using xml: 
	I0429 00:48:43.891439   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) <domain type='kvm'>
	I0429 00:48:43.891447   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <name>kubernetes-upgrade-219055</name>
	I0429 00:48:43.891456   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <memory unit='MiB'>2200</memory>
	I0429 00:48:43.891465   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <vcpu>2</vcpu>
	I0429 00:48:43.891471   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <features>
	I0429 00:48:43.891481   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <acpi/>
	I0429 00:48:43.891486   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <apic/>
	I0429 00:48:43.891501   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <pae/>
	I0429 00:48:43.891508   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     
	I0429 00:48:43.891514   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   </features>
	I0429 00:48:43.891524   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <cpu mode='host-passthrough'>
	I0429 00:48:43.891535   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   
	I0429 00:48:43.891541   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   </cpu>
	I0429 00:48:43.891547   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <os>
	I0429 00:48:43.891554   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <type>hvm</type>
	I0429 00:48:43.891560   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <boot dev='cdrom'/>
	I0429 00:48:43.891567   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <boot dev='hd'/>
	I0429 00:48:43.891573   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <bootmenu enable='no'/>
	I0429 00:48:43.891580   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   </os>
	I0429 00:48:43.891585   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   <devices>
	I0429 00:48:43.891596   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <disk type='file' device='cdrom'>
	I0429 00:48:43.891608   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/boot2docker.iso'/>
	I0429 00:48:43.891616   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <target dev='hdc' bus='scsi'/>
	I0429 00:48:43.891624   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <readonly/>
	I0429 00:48:43.891631   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </disk>
	I0429 00:48:43.891644   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <disk type='file' device='disk'>
	I0429 00:48:43.891653   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 00:48:43.891684   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/kubernetes-upgrade-219055.rawdisk'/>
	I0429 00:48:43.891707   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <target dev='hda' bus='virtio'/>
	I0429 00:48:43.891722   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </disk>
	I0429 00:48:43.891732   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <interface type='network'>
	I0429 00:48:43.891752   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <source network='mk-kubernetes-upgrade-219055'/>
	I0429 00:48:43.891765   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <model type='virtio'/>
	I0429 00:48:43.891771   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </interface>
	I0429 00:48:43.891775   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <interface type='network'>
	I0429 00:48:43.891781   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <source network='default'/>
	I0429 00:48:43.891789   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <model type='virtio'/>
	I0429 00:48:43.891806   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </interface>
	I0429 00:48:43.891814   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <serial type='pty'>
	I0429 00:48:43.891820   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <target port='0'/>
	I0429 00:48:43.891827   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </serial>
	I0429 00:48:43.891832   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <console type='pty'>
	I0429 00:48:43.891840   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <target type='serial' port='0'/>
	I0429 00:48:43.891850   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </console>
	I0429 00:48:43.891861   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     <rng model='virtio'>
	I0429 00:48:43.891867   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)       <backend model='random'>/dev/random</backend>
	I0429 00:48:43.891877   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     </rng>
	I0429 00:48:43.891882   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     
	I0429 00:48:43.891890   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)     
	I0429 00:48:43.891910   61110 main.go:141] libmachine: (kubernetes-upgrade-219055)   </devices>
	I0429 00:48:43.891930   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) </domain>
	I0429 00:48:43.891944   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) 
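The driver then defines and boots the domain from the XML echoed above, talking to libvirt directly. As an illustrative approximation only (the kvm2 driver does not shell out to the virsh CLI), the equivalent steps could be scripted from Go like this:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assume the domain XML shown in the log has been written to this file (hypothetical path).
	xmlPath := "/tmp/kubernetes-upgrade-219055.xml"

	// "virsh define" registers the domain with libvirt;
	// "virsh start" then boots it, after which DHCP can assign an IP.
	for _, args := range [][]string{
		{"virsh", "define", xmlPath},
		{"virsh", "start", "kubernetes-upgrade-219055"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
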
	I0429 00:48:43.896608   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:12:12:6c in network default
	I0429 00:48:43.897239   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Ensuring networks are active...
	I0429 00:48:43.897266   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:43.897897   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Ensuring network default is active
	I0429 00:48:43.898236   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Ensuring network mk-kubernetes-upgrade-219055 is active
	I0429 00:48:43.898745   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Getting domain xml...
	I0429 00:48:43.899494   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Creating domain...
	I0429 00:48:45.124488   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Waiting to get IP...
	I0429 00:48:45.125238   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.125670   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.125707   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:45.125641   61906 retry.go:31] will retry after 232.507335ms: waiting for machine to come up
	I0429 00:48:45.360399   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.360945   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.360978   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:45.360911   61906 retry.go:31] will retry after 262.711117ms: waiting for machine to come up
	I0429 00:48:45.626450   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.626930   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.626959   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:45.626891   61906 retry.go:31] will retry after 313.615084ms: waiting for machine to come up
	I0429 00:48:45.942338   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.942943   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:45.942977   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:45.942889   61906 retry.go:31] will retry after 527.346672ms: waiting for machine to come up
	I0429 00:48:46.472314   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:46.472788   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:46.472825   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:46.472728   61906 retry.go:31] will retry after 717.166049ms: waiting for machine to come up
	I0429 00:48:47.191816   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:47.192314   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:47.192359   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:47.192267   61906 retry.go:31] will retry after 649.477336ms: waiting for machine to come up
	I0429 00:48:47.843212   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:47.843756   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:47.843783   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:47.843730   61906 retry.go:31] will retry after 840.910057ms: waiting for machine to come up
	I0429 00:48:48.685670   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:48.686188   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:48.686222   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:48.686113   61906 retry.go:31] will retry after 1.440290191s: waiting for machine to come up
	I0429 00:48:50.128923   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:50.129360   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:50.129388   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:50.129322   61906 retry.go:31] will retry after 1.231021999s: waiting for machine to come up
	I0429 00:48:51.362282   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:51.362694   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:51.362724   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:51.362656   61906 retry.go:31] will retry after 2.193503026s: waiting for machine to come up
	I0429 00:48:53.557644   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:53.558169   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:53.558199   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:53.558137   61906 retry.go:31] will retry after 1.925933573s: waiting for machine to come up
	I0429 00:48:55.486258   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:55.486789   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:55.486828   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:55.486699   61906 retry.go:31] will retry after 2.646588177s: waiting for machine to come up
	I0429 00:48:58.135073   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:48:58.135606   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:48:58.135638   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:48:58.135555   61906 retry.go:31] will retry after 2.924316618s: waiting for machine to come up
	I0429 00:49:01.061652   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:01.062146   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find current IP address of domain kubernetes-upgrade-219055 in network mk-kubernetes-upgrade-219055
	I0429 00:49:01.062183   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | I0429 00:49:01.062100   61906 retry.go:31] will retry after 5.568674117s: waiting for machine to come up
	I0429 00:49:06.632621   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.633119   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has current primary IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.633160   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Found IP for machine: 192.168.50.69
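The "will retry after ..." lines above come from polling the DHCP leases until the new MAC address shows up with an address. A minimal sketch of that kind of wait loop (the lookupIP helper below is hypothetical, and minikube's own retry.go applies jittered backoff rather than this fixed growth) could look like:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for querying libvirt's DHCP leases for a MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

// waitForIP polls with a growing delay until an IP appears or the deadline passes.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the backoff, roughly like the intervals in the log
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:b1:36:0e", 10*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}
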
	I0429 00:49:06.633184   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Reserving static IP address...
	I0429 00:49:06.633621   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-219055", mac: "52:54:00:b1:36:0e", ip: "192.168.50.69"} in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.707889   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Getting to WaitForSSH function...
	I0429 00:49:06.707927   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Reserved static IP address: 192.168.50.69
	I0429 00:49:06.707942   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Waiting for SSH to be available...
	I0429 00:49:06.710362   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.710712   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:06.710741   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.710858   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Using SSH client type: external
	I0429 00:49:06.710987   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa (-rw-------)
	I0429 00:49:06.711025   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 00:49:06.711048   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | About to run SSH command:
	I0429 00:49:06.711065   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | exit 0
	I0429 00:49:06.842599   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | SSH cmd err, output: <nil>: 
	I0429 00:49:06.842825   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) KVM machine creation complete!
	I0429 00:49:06.843157   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetConfigRaw
	I0429 00:49:06.843675   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:06.843887   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:06.844066   61110 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 00:49:06.844080   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetState
	I0429 00:49:06.845272   61110 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 00:49:06.845288   61110 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 00:49:06.845295   61110 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 00:49:06.845301   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:06.847571   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.847954   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:06.847989   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.848139   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:06.848312   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:06.848444   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:06.848561   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:06.848715   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:06.848906   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:06.848918   61110 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 00:49:06.961491   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:49:06.961516   61110 main.go:141] libmachine: Detecting the provisioner...
	I0429 00:49:06.961524   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:06.964456   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.964880   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:06.964905   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:06.965035   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:06.965236   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:06.965414   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:06.965611   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:06.965788   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:06.965981   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:06.965995   61110 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 00:49:07.079729   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 00:49:07.079858   61110 main.go:141] libmachine: found compatible host: buildroot
	I0429 00:49:07.079877   61110 main.go:141] libmachine: Provisioning with buildroot...
	I0429 00:49:07.079890   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:49:07.080141   61110 buildroot.go:166] provisioning hostname "kubernetes-upgrade-219055"
	I0429 00:49:07.080186   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:49:07.080380   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:07.083396   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.083757   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.083786   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.083967   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:07.084168   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.084361   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.084490   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:07.084640   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:07.084867   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:07.084882   61110 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-219055 && echo "kubernetes-upgrade-219055" | sudo tee /etc/hostname
	I0429 00:49:07.216429   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219055
	
	I0429 00:49:07.216465   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:07.219780   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.220199   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.220234   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.220423   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:07.220664   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.220859   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.221011   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:07.221200   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:07.221409   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:07.221429   61110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-219055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-219055/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-219055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:49:07.349441   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:49:07.349477   61110 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:49:07.349531   61110 buildroot.go:174] setting up certificates
	I0429 00:49:07.349555   61110 provision.go:84] configureAuth start
	I0429 00:49:07.349579   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:49:07.349893   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:49:07.352802   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.353183   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.353216   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.353401   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:07.355949   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.356285   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.356315   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.356464   61110 provision.go:143] copyHostCerts
	I0429 00:49:07.356519   61110 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:49:07.356553   61110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:49:07.356607   61110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:49:07.356720   61110 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:49:07.356732   61110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:49:07.356761   61110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:49:07.356831   61110 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:49:07.356843   61110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:49:07.356868   61110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:49:07.356929   61110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-219055 san=[127.0.0.1 192.168.50.69 kubernetes-upgrade-219055 localhost minikube]
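configureAuth regenerates a server certificate carrying the SANs listed in the log line above (loopback, the VM's IP, the machine name, localhost, minikube). A condensed, self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 follows; it is self-signed for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-219055"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones in the log line above.
		DNSNames:    []string{"kubernetes-upgrade-219055", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.69")},
	}
	// Self-signed for this sketch; the real server.pem is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
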
	I0429 00:49:07.511413   61110 provision.go:177] copyRemoteCerts
	I0429 00:49:07.511467   61110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:49:07.511490   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:07.514525   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.514932   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.514963   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.515178   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:07.515370   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.515582   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:07.515736   61110 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:49:07.605854   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:49:07.638242   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 00:49:07.673963   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:49:07.702221   61110 provision.go:87] duration metric: took 352.64231ms to configureAuth
	I0429 00:49:07.702255   61110 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:49:07.702461   61110 config.go:182] Loaded profile config "kubernetes-upgrade-219055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 00:49:07.702552   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:07.705506   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.705909   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:07.705942   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:07.706125   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:07.706318   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.706548   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:07.706717   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:07.706981   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:07.707163   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:07.707180   61110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:49:08.008768   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:49:08.008793   61110 main.go:141] libmachine: Checking connection to Docker...
	I0429 00:49:08.008804   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetURL
	I0429 00:49:08.010069   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | Using libvirt version 6000000
	I0429 00:49:08.012413   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.012746   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.012776   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.012929   61110 main.go:141] libmachine: Docker is up and running!
	I0429 00:49:08.012947   61110 main.go:141] libmachine: Reticulating splines...
	I0429 00:49:08.012954   61110 client.go:171] duration metric: took 24.54963606s to LocalClient.Create
	I0429 00:49:08.012985   61110 start.go:167] duration metric: took 24.54970767s to libmachine.API.Create "kubernetes-upgrade-219055"
	I0429 00:49:08.012999   61110 start.go:293] postStartSetup for "kubernetes-upgrade-219055" (driver="kvm2")
	I0429 00:49:08.013011   61110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:49:08.013033   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:08.013308   61110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:49:08.013350   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:08.015765   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.016085   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.016120   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.016194   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:08.016386   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:08.016562   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:08.016738   61110 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:49:08.101971   61110 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:49:08.106834   61110 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:49:08.106861   61110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:49:08.106920   61110 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:49:08.107006   61110 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:49:08.107126   61110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:49:08.118450   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:49:08.144777   61110 start.go:296] duration metric: took 131.764902ms for postStartSetup
	I0429 00:49:08.144845   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetConfigRaw
	I0429 00:49:08.145489   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:49:08.148256   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.148610   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.148642   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.148876   61110 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/config.json ...
	I0429 00:49:08.149045   61110 start.go:128] duration metric: took 24.705873524s to createHost
	I0429 00:49:08.149066   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:08.151234   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.151553   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.151582   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.151684   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:08.151871   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:08.152024   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:08.152162   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:08.152315   61110 main.go:141] libmachine: Using SSH client type: native
	I0429 00:49:08.152538   61110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:49:08.152549   61110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 00:49:08.267825   61110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714351748.250991969
	
	I0429 00:49:08.267846   61110 fix.go:216] guest clock: 1714351748.250991969
	I0429 00:49:08.267853   61110 fix.go:229] Guest: 2024-04-29 00:49:08.250991969 +0000 UTC Remote: 2024-04-29 00:49:08.14905657 +0000 UTC m=+80.577790835 (delta=101.935399ms)
	I0429 00:49:08.267877   61110 fix.go:200] guest clock delta is within tolerance: 101.935399ms
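The fix.go lines compare the guest's "date +%s.%N" output against the host clock and accept the machine when the skew is small. The check reduces to something like the snippet below, using the two timestamps from the log; the tolerance value is an assumption for illustration, not the one fix.go actually uses:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log above: guest clock vs. host ("Remote") clock.
	guest := time.Unix(1714351748, 250991969)
	host := time.Date(2024, 4, 29, 0, 49, 8, 149056570, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skew too large: %v, would resync\n", delta)
	}
}
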
	I0429 00:49:08.267884   61110 start.go:83] releasing machines lock for "kubernetes-upgrade-219055", held for 24.824911381s
	I0429 00:49:08.267910   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:08.268178   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:49:08.271385   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.271795   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.271826   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.271974   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:08.272529   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:08.272689   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:49:08.272786   61110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:49:08.272826   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:08.272880   61110 ssh_runner.go:195] Run: cat /version.json
	I0429 00:49:08.272911   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:49:08.280680   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.281104   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.281131   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.281292   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.281313   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:08.281515   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:08.281669   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:08.281810   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:08.281844   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:08.281850   61110 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:49:08.282007   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:49:08.282309   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:49:08.282494   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:49:08.282643   61110 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:49:08.387892   61110 ssh_runner.go:195] Run: systemctl --version
	I0429 00:49:08.395276   61110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:49:08.570301   61110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:49:08.579040   61110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:49:08.579127   61110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:49:08.597355   61110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 00:49:08.597377   61110 start.go:494] detecting cgroup driver to use...
	I0429 00:49:08.597446   61110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:49:08.620076   61110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:49:08.636641   61110 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:49:08.636692   61110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:49:08.652863   61110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:49:08.671654   61110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:49:08.811513   61110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:49:08.984073   61110 docker.go:233] disabling docker service ...
	I0429 00:49:08.984143   61110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:49:09.002450   61110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:49:09.019099   61110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:49:09.193594   61110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:49:09.361795   61110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:49:09.380492   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:49:09.405768   61110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 00:49:09.405847   61110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:49:09.420016   61110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:49:09.420075   61110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:49:09.433614   61110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:49:09.447760   61110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
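
For reference, the four sed edits logged just above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image, cgroup manager, and conmon cgroup match what minikube expects. Below is a minimal standalone Go sketch of the same substitutions; it is illustrative only (not minikube's crio.go), and the starting file contents are an assumption.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"cgroup_manager = \"systemd\"\n" +
		"conmon_cgroup = \"system.slice\"\n"

	// sed 1: force the pause image to registry.k8s.io/pause:3.2
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed 2: switch the cgroup manager to cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed 3 + 4: drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
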
	I0429 00:49:09.461436   61110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:49:09.475283   61110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:49:09.490696   61110 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 00:49:09.490763   61110 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 00:49:09.514291   61110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
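
The three lines above record a fallback: the sysctl check for net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled before crio is restarted. A standalone Go sketch of that fallback, assuming the same commands are available on the guest (this is not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the
// bridge-nf-call-iptables sysctl is missing, load br_netfilter and
// enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl key present, nothing to do
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
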
	I0429 00:49:09.530279   61110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:49:09.701140   61110 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:49:09.918232   61110 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:49:09.918289   61110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:49:09.924114   61110 start.go:562] Will wait 60s for crictl version
	I0429 00:49:09.924182   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:09.928975   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:49:09.982930   61110 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:49:09.983025   61110 ssh_runner.go:195] Run: crio --version
	I0429 00:49:10.030603   61110 ssh_runner.go:195] Run: crio --version
	I0429 00:49:10.081028   61110 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 00:49:10.082699   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:49:10.085740   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:10.086164   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:49:10.086196   61110 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:49:10.086439   61110 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 00:49:10.091784   61110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 00:49:10.108484   61110 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:49:10.108584   61110 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 00:49:10.108641   61110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:49:10.149991   61110 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 00:49:10.150130   61110 ssh_runner.go:195] Run: which lz4
	I0429 00:49:10.155424   61110 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 00:49:10.160737   61110 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 00:49:10.160768   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 00:49:12.431878   61110 crio.go:462] duration metric: took 2.276507433s to copy over tarball
	I0429 00:49:12.431949   61110 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 00:49:15.363457   61110 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931459446s)
	I0429 00:49:15.363487   61110 crio.go:469] duration metric: took 2.931582474s to extract the tarball
	I0429 00:49:15.363495   61110 ssh_runner.go:146] rm: /preloaded.tar.lz4
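
The preload handling above follows a simple pattern: stat the tarball on the guest, copy the cached preload over when it is missing, extract it into /var with xattrs preserved, then remove the tarball. A rough standalone Go sketch of that flow, with the scp step left as a placeholder (this is an illustration, not minikube's preload.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check corresponding to the logged `stat -c "%s %y"` call.
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		fmt.Println("tarball missing; the cached preload would be copied over here")
	}
	// Extraction with capabilities preserved, matching the logged tar invocation.
	if out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	// Cleanup, matching the final rm in the log.
	_ = exec.Command("sudo", "rm", "-f", tarball).Run()
}
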
	I0429 00:49:15.421611   61110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:49:15.477786   61110 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 00:49:15.477821   61110 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 00:49:15.477903   61110 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:49:15.477921   61110 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:49:15.477943   61110 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:49:15.477921   61110 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:49:15.478037   61110 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 00:49:15.478088   61110 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:49:15.478208   61110 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 00:49:15.478385   61110 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:49:15.479851   61110 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 00:49:15.479871   61110 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:49:15.479882   61110 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:49:15.479889   61110 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:49:15.479890   61110 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 00:49:15.479919   61110 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:49:15.479932   61110 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:49:15.480036   61110 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:49:15.606495   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 00:49:15.626054   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 00:49:15.663960   61110 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 00:49:15.664023   61110 cri.go:232] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 00:49:15.664078   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.688129   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 00:49:15.688345   61110 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 00:49:15.688383   61110 cri.go:232] Removing image: registry.k8s.io/pause:3.2
	I0429 00:49:15.688426   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.719980   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:49:15.733796   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 00:49:15.733902   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 00:49:15.791752   61110 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 00:49:15.791789   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 00:49:15.791802   61110 cri.go:232] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:49:15.791854   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.796864   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:49:15.820748   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:49:15.820888   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:49:15.833125   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 00:49:15.835316   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:49:15.857885   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 00:49:15.951213   61110 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 00:49:15.951257   61110 cri.go:232] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:49:15.951313   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.976162   61110 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 00:49:15.976198   61110 cri.go:232] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:49:15.976245   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.977895   61110 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 00:49:15.977938   61110 cri.go:232] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:49:15.977953   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:49:15.977972   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.978097   61110 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 00:49:15.978130   61110 cri.go:232] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:49:15.978160   61110 ssh_runner.go:195] Run: which crictl
	I0429 00:49:15.981761   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:49:16.039044   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 00:49:16.039122   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:49:16.039169   61110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 00:49:16.039215   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 00:49:16.097091   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 00:49:16.097202   61110 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 00:49:16.434847   61110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:49:16.586146   61110 cache_images.go:92] duration metric: took 1.108306256s to LoadCachedImages
	W0429 00:49:16.586256   61110 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0429 00:49:16.586274   61110 kubeadm.go:928] updating node { 192.168.50.69 8443 v1.20.0 crio true true} ...
	I0429 00:49:16.586411   61110 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-219055 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:49:16.586503   61110 ssh_runner.go:195] Run: crio config
	I0429 00:49:16.648013   61110 cni.go:84] Creating CNI manager for ""
	I0429 00:49:16.648034   61110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:49:16.648042   61110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:49:16.648064   61110 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-219055 NodeName:kubernetes-upgrade-219055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 00:49:16.648184   61110 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-219055"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:49:16.648258   61110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 00:49:16.660225   61110 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:49:16.660310   61110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:49:16.671369   61110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0429 00:49:16.692107   61110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:49:16.712858   61110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 00:49:16.736996   61110 ssh_runner.go:195] Run: grep 192.168.50.69	control-plane.minikube.internal$ /etc/hosts
	I0429 00:49:16.742143   61110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 00:49:16.758097   61110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:49:16.911182   61110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:49:16.931954   61110 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055 for IP: 192.168.50.69
	I0429 00:49:16.931978   61110 certs.go:194] generating shared ca certs ...
	I0429 00:49:16.932007   61110 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:16.932182   61110 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:49:16.932246   61110 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:49:16.932261   61110 certs.go:256] generating profile certs ...
	I0429 00:49:16.932331   61110 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.key
	I0429 00:49:16.932361   61110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.crt with IP's: []
	I0429 00:49:17.264706   61110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.crt ...
	I0429 00:49:17.264734   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.crt: {Name:mkc7041deaab663fe22106c3e39ce81543aa615a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.264888   61110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.key ...
	I0429 00:49:17.264904   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.key: {Name:mkb3ba89aa50cfc4104009d7e1ebe6916ef625e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.264979   61110 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key.752b27af
	I0429 00:49:17.264997   61110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt.752b27af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.69]
	I0429 00:49:17.496629   61110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt.752b27af ...
	I0429 00:49:17.496660   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt.752b27af: {Name:mk6dc57debdd1bebd02c5e8e7132047581fca7e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.496880   61110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key.752b27af ...
	I0429 00:49:17.496905   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key.752b27af: {Name:mka30af1f5b7286fc9deaa225d327a98f19835d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.497009   61110 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt.752b27af -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt
	I0429 00:49:17.497106   61110 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key.752b27af -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key
	I0429 00:49:17.497185   61110 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key
	I0429 00:49:17.497206   61110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.crt with IP's: []
	I0429 00:49:17.593635   61110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.crt ...
	I0429 00:49:17.593664   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.crt: {Name:mk2684eed0ac6d2b7e9dafc79ec23d6eb9289f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.593869   61110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key ...
	I0429 00:49:17.593894   61110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key: {Name:mk933b2d8aa6da4721ba7349d7d2001985ab4ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:49:17.594128   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:49:17.594179   61110 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:49:17.594192   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:49:17.594224   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:49:17.594263   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:49:17.594285   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:49:17.594322   61110 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:49:17.594894   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:49:17.634885   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:49:17.665029   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:49:17.698390   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:49:17.733551   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 00:49:17.766633   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 00:49:17.798954   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:49:17.825368   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 00:49:17.871496   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:49:17.907524   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:49:17.937852   61110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:49:17.969142   61110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:49:17.990636   61110 ssh_runner.go:195] Run: openssl version
	I0429 00:49:17.997590   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:49:18.012657   61110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:49:18.018437   61110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:49:18.018512   61110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:49:18.024993   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:49:18.040855   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:49:18.055646   61110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:49:18.062206   61110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:49:18.062290   61110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:49:18.069803   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:49:18.084720   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:49:18.100878   61110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:49:18.106807   61110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:49:18.106880   61110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:49:18.116280   61110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
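
The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 to it. A hedged standalone Go sketch of that pattern (not minikube's certs.go; the example cert path is taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the logged pattern: hash the certificate and
// symlink /etc/ssl/certs/<hash>.0 to it unless the link already exists.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already in place, as the `test -L` check above verifies
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
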
	I0429 00:49:18.131847   61110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:49:18.137076   61110 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 00:49:18.137143   61110 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:49:18.137231   61110 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:49:18.137312   61110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:49:18.185190   61110 cri.go:91] found id: ""
	I0429 00:49:18.185281   61110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 00:49:18.201522   61110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 00:49:18.217505   61110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 00:49:18.233585   61110 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 00:49:18.233612   61110 kubeadm.go:156] found existing configuration files:
	
	I0429 00:49:18.233669   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 00:49:18.254006   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 00:49:18.254091   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 00:49:18.271705   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 00:49:18.283892   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 00:49:18.283986   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 00:49:18.299648   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 00:49:18.314577   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 00:49:18.314659   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 00:49:18.330111   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 00:49:18.345385   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 00:49:18.345529   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 00:49:18.361297   61110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 00:49:18.515113   61110 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 00:49:18.515309   61110 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 00:49:18.750802   61110 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 00:49:18.750978   61110 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 00:49:18.751115   61110 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 00:49:18.981528   61110 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 00:49:18.983811   61110 out.go:204]   - Generating certificates and keys ...
	I0429 00:49:18.983928   61110 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 00:49:18.984029   61110 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 00:49:19.203218   61110 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 00:49:19.401686   61110 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 00:49:19.642806   61110 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 00:49:19.898536   61110 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 00:49:20.211204   61110 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 00:49:20.211601   61110 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	I0429 00:49:20.310240   61110 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 00:49:20.310693   61110 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	I0429 00:49:20.487941   61110 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 00:49:20.608509   61110 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 00:49:20.696335   61110 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 00:49:20.696666   61110 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 00:49:20.805742   61110 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 00:49:20.945814   61110 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 00:49:21.186836   61110 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 00:49:21.715884   61110 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 00:49:21.743822   61110 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 00:49:21.744987   61110 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 00:49:21.745083   61110 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 00:49:21.890190   61110 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 00:49:21.892447   61110 out.go:204]   - Booting up control plane ...
	I0429 00:49:21.892571   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 00:49:21.901855   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 00:49:21.903511   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 00:49:21.904646   61110 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 00:49:21.923148   61110 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 00:50:01.920756   61110 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 00:50:01.920881   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:50:01.921110   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:50:06.922040   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:50:06.922319   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:50:16.923031   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:50:16.923291   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:50:36.924736   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:50:36.925013   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:51:16.924655   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:51:16.925289   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:51:16.925308   61110 kubeadm.go:309] 
	I0429 00:51:16.925451   61110 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 00:51:16.925558   61110 kubeadm.go:309] 		timed out waiting for the condition
	I0429 00:51:16.925568   61110 kubeadm.go:309] 
	I0429 00:51:16.925664   61110 kubeadm.go:309] 	This error is likely caused by:
	I0429 00:51:16.925743   61110 kubeadm.go:309] 		- The kubelet is not running
	I0429 00:51:16.926028   61110 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 00:51:16.926064   61110 kubeadm.go:309] 
	I0429 00:51:16.926358   61110 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 00:51:16.926472   61110 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 00:51:16.926566   61110 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 00:51:16.926577   61110 kubeadm.go:309] 
	I0429 00:51:16.926850   61110 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 00:51:16.927138   61110 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 00:51:16.927162   61110 kubeadm.go:309] 
	I0429 00:51:16.927417   61110 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 00:51:16.927702   61110 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 00:51:16.927901   61110 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 00:51:16.928074   61110 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 00:51:16.928113   61110 kubeadm.go:309] 
	I0429 00:51:16.928345   61110 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 00:51:16.928861   61110 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 00:51:16.928967   61110 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0429 00:51:16.929164   61110 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219055 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 00:51:16.929208   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 00:51:19.450466   61110 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.521232852s)
	I0429 00:51:19.450542   61110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:51:19.465276   61110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 00:51:19.476031   61110 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 00:51:19.476060   61110 kubeadm.go:156] found existing configuration files:
	
	I0429 00:51:19.476116   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 00:51:19.486524   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 00:51:19.486612   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 00:51:19.497409   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 00:51:19.507625   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 00:51:19.507699   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 00:51:19.518076   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 00:51:19.527981   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 00:51:19.528044   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 00:51:19.537904   61110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 00:51:19.547375   61110 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 00:51:19.547423   61110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 00:51:19.557174   61110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 00:51:19.631712   61110 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 00:51:19.632200   61110 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 00:51:19.794089   61110 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 00:51:19.794207   61110 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 00:51:19.794331   61110 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 00:51:19.989742   61110 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 00:51:19.992035   61110 out.go:204]   - Generating certificates and keys ...
	I0429 00:51:19.992157   61110 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 00:51:19.992239   61110 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 00:51:19.992364   61110 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 00:51:19.992422   61110 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 00:51:19.992532   61110 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 00:51:19.992598   61110 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 00:51:19.992688   61110 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 00:51:19.992821   61110 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 00:51:19.993477   61110 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 00:51:19.993855   61110 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 00:51:19.993917   61110 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 00:51:19.994012   61110 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 00:51:20.230343   61110 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 00:51:20.553171   61110 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 00:51:20.729934   61110 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 00:51:20.861903   61110 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 00:51:20.879012   61110 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 00:51:20.880186   61110 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 00:51:20.880278   61110 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 00:51:21.026498   61110 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 00:51:21.028752   61110 out.go:204]   - Booting up control plane ...
	I0429 00:51:21.028895   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 00:51:21.037893   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 00:51:21.039266   61110 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 00:51:21.040220   61110 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 00:51:21.050286   61110 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 00:52:01.052615   61110 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 00:52:01.052736   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:52:01.052933   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:52:06.053246   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:52:06.053528   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:52:16.055151   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:52:16.055439   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:52:36.056457   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:52:36.056748   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:53:16.056417   61110 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 00:53:16.056638   61110 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 00:53:16.056649   61110 kubeadm.go:309] 
	I0429 00:53:16.056709   61110 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 00:53:16.056776   61110 kubeadm.go:309] 		timed out waiting for the condition
	I0429 00:53:16.056790   61110 kubeadm.go:309] 
	I0429 00:53:16.056834   61110 kubeadm.go:309] 	This error is likely caused by:
	I0429 00:53:16.056891   61110 kubeadm.go:309] 		- The kubelet is not running
	I0429 00:53:16.057020   61110 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 00:53:16.057034   61110 kubeadm.go:309] 
	I0429 00:53:16.057163   61110 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 00:53:16.057205   61110 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 00:53:16.057238   61110 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 00:53:16.057245   61110 kubeadm.go:309] 
	I0429 00:53:16.057331   61110 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 00:53:16.057400   61110 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 00:53:16.057407   61110 kubeadm.go:309] 
	I0429 00:53:16.057496   61110 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 00:53:16.057580   61110 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 00:53:16.057673   61110 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 00:53:16.057785   61110 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 00:53:16.057797   61110 kubeadm.go:309] 
	I0429 00:53:16.059148   61110 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 00:53:16.059267   61110 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 00:53:16.059398   61110 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 00:53:16.059462   61110 kubeadm.go:393] duration metric: took 3m57.922321243s to StartCluster
	I0429 00:53:16.059524   61110 cri.go:56] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 00:53:16.059658   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 00:53:16.109803   61110 cri.go:91] found id: ""
	I0429 00:53:16.109832   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.109840   61110 logs.go:278] No container was found matching "kube-apiserver"
	I0429 00:53:16.109845   61110 cri.go:56] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 00:53:16.109901   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 00:53:16.146986   61110 cri.go:91] found id: ""
	I0429 00:53:16.147011   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.147022   61110 logs.go:278] No container was found matching "etcd"
	I0429 00:53:16.147028   61110 cri.go:56] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 00:53:16.147073   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 00:53:16.183829   61110 cri.go:91] found id: ""
	I0429 00:53:16.183859   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.183867   61110 logs.go:278] No container was found matching "coredns"
	I0429 00:53:16.183873   61110 cri.go:56] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 00:53:16.183927   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 00:53:16.220447   61110 cri.go:91] found id: ""
	I0429 00:53:16.220475   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.220485   61110 logs.go:278] No container was found matching "kube-scheduler"
	I0429 00:53:16.220492   61110 cri.go:56] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 00:53:16.220549   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 00:53:16.257964   61110 cri.go:91] found id: ""
	I0429 00:53:16.257992   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.258001   61110 logs.go:278] No container was found matching "kube-proxy"
	I0429 00:53:16.258008   61110 cri.go:56] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 00:53:16.258075   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 00:53:16.299329   61110 cri.go:91] found id: ""
	I0429 00:53:16.299357   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.299365   61110 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 00:53:16.299371   61110 cri.go:56] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 00:53:16.299442   61110 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 00:53:16.341947   61110 cri.go:91] found id: ""
	I0429 00:53:16.341981   61110 logs.go:276] 0 containers: []
	W0429 00:53:16.341995   61110 logs.go:278] No container was found matching "kindnet"
	I0429 00:53:16.342008   61110 logs.go:123] Gathering logs for kubelet ...
	I0429 00:53:16.342038   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 00:53:16.395568   61110 logs.go:123] Gathering logs for dmesg ...
	I0429 00:53:16.395596   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 00:53:16.412713   61110 logs.go:123] Gathering logs for describe nodes ...
	I0429 00:53:16.412747   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 00:53:16.542297   61110 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 00:53:16.542324   61110 logs.go:123] Gathering logs for CRI-O ...
	I0429 00:53:16.542346   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 00:53:16.643131   61110 logs.go:123] Gathering logs for container status ...
	I0429 00:53:16.643166   61110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0429 00:53:16.685471   61110 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 00:53:16.685513   61110 out.go:239] * 
	* 
	W0429 00:53:16.685570   61110 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 00:53:16.685601   61110 out.go:239] * 
	* 
	W0429 00:53:16.686631   61110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 00:53:16.689767   61110 out.go:177] 
	W0429 00:53:16.690870   61110 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 00:53:16.690915   61110 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 00:53:16.690935   61110 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 00:53:16.692535   61110 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
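The failure above follows the pattern kubeadm itself describes: the kubelet never becomes healthy, its health endpoint on 127.0.0.1:10248 refuses connections, and no control-plane containers are ever found by crictl. A minimal troubleshooting sketch, assembled only from the commands and the flag already printed in the log above (the profile name, memory size, and the kubelet.cgroup-driver=systemd override come from this run and are assumptions for any other environment):

	# Inspect the kubelet on the node, as kubeadm suggests:
	minikube ssh -p kubernetes-upgrade-219055 "sudo systemctl status kubelet"
	minikube ssh -p kubernetes-upgrade-219055 "sudo journalctl -xeu kubelet"

	# List any control-plane containers CRI-O started (none were found in this run):
	minikube ssh -p kubernetes-upgrade-219055 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a" | grep kube | grep -v pause

	# Retry the same start with the cgroup-driver override minikube suggests:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd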
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-219055
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-219055: (2.652502129s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-219055 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-219055 status --format={{.Host}}: exit status 7 (92.806692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.1888152s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-219055 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.768253ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-219055
	    minikube start -p kubernetes-upgrade-219055 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2190552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-219055 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
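As the test comment notes, this downgrade is expected to fail: minikube refuses to move an existing v1.30.0 cluster back to v1.20.0 and instead suggests recreating the profile. A short sketch of that path, using the commands verbatim from the suggestion above (profile name taken from this run):

	minikube delete -p kubernetes-upgrade-219055
	minikube start -p kubernetes-upgrade-219055 --kubernetes-version=v1.20.0

The test instead takes the third suggested option and restarts the existing cluster at v1.30.0, which succeeds below.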
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-219055 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.86641608s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-29 00:55:16.699216493 +0000 UTC m=+6487.292077351
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-219055 -n kubernetes-upgrade-219055
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-219055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-219055 logs -n 25: (2.060292069s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-067605 sudo                 | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                 | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo find            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo crio            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-067605                      | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:50 UTC |
	| start   | -p pause-934652 --memory=2048         | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:52 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-634323             | stopped-upgrade-634323    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p cert-expiration-523983             | cert-expiration-523983    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-069355 sudo           | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-069355                | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p force-systemd-flag-106262          | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-934652                       | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:53 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-106262 ssh cat     | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-106262          | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	| start   | -p cert-options-124477                | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-219055          | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	| start   | -p kubernetes-upgrade-219055          | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-934652                       | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	| start   | -p old-k8s-version-681355             | old-k8s-version-681355    | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | cert-options-124477 ssh               | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-124477 -- sudo        | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-124477                | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	| start   | -p no-preload-440870                  | no-preload-440870         | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-219055          | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-219055          | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:54 UTC | 29 Apr 24 00:55 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:54:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:54:05.884409   68628 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:54:05.884670   68628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:54:05.884681   68628 out.go:304] Setting ErrFile to fd 2...
	I0429 00:54:05.884685   68628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:54:05.884853   68628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:54:05.885378   68628 out.go:298] Setting JSON to false
	I0429 00:54:05.886340   68628 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9390,"bootTime":1714342656,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:54:05.886402   68628 start.go:139] virtualization: kvm guest
	I0429 00:54:05.888265   68628 out.go:177] * [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:54:05.890314   68628 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:54:05.890364   68628 notify.go:220] Checking for updates...
	I0429 00:54:05.891781   68628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:54:05.893280   68628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:54:05.894741   68628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:54:05.896286   68628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:54:05.897621   68628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:54:05.899385   68628 config.go:182] Loaded profile config "kubernetes-upgrade-219055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:54:05.899776   68628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:54:05.899815   68628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:54:05.915472   68628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0429 00:54:05.915901   68628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:54:05.916483   68628 main.go:141] libmachine: Using API Version  1
	I0429 00:54:05.916507   68628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:54:05.916908   68628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:54:05.917093   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:05.917351   68628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:54:05.917693   68628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:54:05.917738   68628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:54:05.932368   68628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0429 00:54:05.932795   68628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:54:05.933236   68628 main.go:141] libmachine: Using API Version  1
	I0429 00:54:05.933266   68628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:54:05.933555   68628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:54:05.933800   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:05.967099   68628 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:54:05.968334   68628 start.go:297] selected driver: kvm2
	I0429 00:54:05.968376   68628 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:54:05.968496   68628 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:54:05.969135   68628 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:54:05.969207   68628 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:54:05.983509   68628 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:54:05.983858   68628 cni.go:84] Creating CNI manager for ""
	I0429 00:54:05.983874   68628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:54:05.983917   68628 start.go:340] cluster config:
	{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:54:05.984012   68628 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:54:05.985642   68628 out.go:177] * Starting "kubernetes-upgrade-219055" primary control-plane node in "kubernetes-upgrade-219055" cluster
	I0429 00:54:04.679110   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:04.679620   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | unable to find current IP address of domain old-k8s-version-681355 in network mk-old-k8s-version-681355
	I0429 00:54:04.679646   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | I0429 00:54:04.679574   68161 retry.go:31] will retry after 4.167607316s: waiting for machine to come up
	I0429 00:54:08.849201   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:08.849756   67962 main.go:141] libmachine: (old-k8s-version-681355) Found IP for machine: 192.168.39.165
	I0429 00:54:08.849785   67962 main.go:141] libmachine: (old-k8s-version-681355) Reserving static IP address...
	I0429 00:54:08.849801   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has current primary IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:08.850123   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-681355", mac: "52:54:00:ad:4c:88", ip: "192.168.39.165"} in network mk-old-k8s-version-681355
	I0429 00:54:08.925619   67962 main.go:141] libmachine: (old-k8s-version-681355) Reserved static IP address: 192.168.39.165
	I0429 00:54:08.925647   67962 main.go:141] libmachine: (old-k8s-version-681355) Waiting for SSH to be available...
	I0429 00:54:08.925686   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | Getting to WaitForSSH function...
	I0429 00:54:08.928577   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:08.928978   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:08.929021   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:08.929154   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | Using SSH client type: external
	I0429 00:54:08.929182   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa (-rw-------)
	I0429 00:54:08.929227   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 00:54:08.929246   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | About to run SSH command:
	I0429 00:54:08.929269   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | exit 0
	I0429 00:54:09.059061   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | SSH cmd err, output: <nil>: 
	I0429 00:54:09.059356   67962 main.go:141] libmachine: (old-k8s-version-681355) KVM machine creation complete!
	I0429 00:54:09.059613   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetConfigRaw
	I0429 00:54:09.060158   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:09.060369   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:09.060534   67962 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 00:54:09.060556   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetState
	I0429 00:54:09.061827   67962 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 00:54:09.061841   67962 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 00:54:09.061848   67962 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 00:54:09.061855   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.064152   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.064647   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.064679   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.064809   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.064984   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.065165   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.065319   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.065495   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:09.065769   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:09.065797   67962 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 00:54:09.178103   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:54:09.178132   67962 main.go:141] libmachine: Detecting the provisioner...
	I0429 00:54:09.178143   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.181131   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.181540   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.181566   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.181734   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.181914   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.182050   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.182215   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.182403   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:09.182612   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:09.182628   67962 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 00:54:09.296299   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 00:54:09.296386   67962 main.go:141] libmachine: found compatible host: buildroot
	I0429 00:54:09.296400   67962 main.go:141] libmachine: Provisioning with buildroot...
	I0429 00:54:09.296416   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetMachineName
	I0429 00:54:09.296705   67962 buildroot.go:166] provisioning hostname "old-k8s-version-681355"
	I0429 00:54:09.296735   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetMachineName
	I0429 00:54:09.296926   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.299632   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.299979   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.300007   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.300160   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.300344   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.300525   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.300650   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.300790   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:09.300955   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:09.300972   67962 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-681355 && echo "old-k8s-version-681355" | sudo tee /etc/hostname
	I0429 00:54:09.427788   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-681355
	
	I0429 00:54:09.427831   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.430718   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.431053   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.431083   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.431296   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.431534   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.431728   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.431926   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.432067   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:09.432275   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:09.432294   67962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-681355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-681355/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-681355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:54:09.557769   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:54:09.557803   67962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:54:09.557854   67962 buildroot.go:174] setting up certificates
	I0429 00:54:09.557878   67962 provision.go:84] configureAuth start
	I0429 00:54:09.557891   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetMachineName
	I0429 00:54:09.558198   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetIP
	I0429 00:54:09.560904   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.561171   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.561198   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.561385   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.563426   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.563724   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.563752   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.563867   67962 provision.go:143] copyHostCerts
	I0429 00:54:09.563923   67962 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:54:09.563935   67962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:54:09.564001   67962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:54:09.564128   67962 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:54:09.564141   67962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:54:09.564172   67962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:54:09.564249   67962 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:54:09.564289   67962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:54:09.564333   67962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:54:09.564404   67962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-681355 san=[127.0.0.1 192.168.39.165 localhost minikube old-k8s-version-681355]
	I0429 00:54:10.424089   68399 start.go:364] duration metric: took 22.103666971s to acquireMachinesLock for "no-preload-440870"
	I0429 00:54:10.424157   68399 start.go:93] Provisioning new machine with config: &{Name:no-preload-440870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:no-preload-440870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 00:54:10.424290   68399 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 00:54:05.986710   68628 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:54:05.986747   68628 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:54:05.986755   68628 cache.go:56] Caching tarball of preloaded images
	I0429 00:54:05.986840   68628 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:54:05.986854   68628 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:54:05.986963   68628 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/config.json ...
	I0429 00:54:05.987179   68628 start.go:360] acquireMachinesLock for kubernetes-upgrade-219055: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:54:09.678109   67962 provision.go:177] copyRemoteCerts
	I0429 00:54:09.678411   67962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:54:09.678439   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.680861   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.681207   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.681244   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.681432   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.681619   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.681763   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.681931   67962 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa Username:docker}
	I0429 00:54:09.769407   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:54:09.798608   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 00:54:09.827731   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:54:09.857239   67962 provision.go:87] duration metric: took 299.325152ms to configureAuth
	I0429 00:54:09.857266   67962 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:54:09.857448   67962 config.go:182] Loaded profile config "old-k8s-version-681355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 00:54:09.857539   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:09.860117   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.860513   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:09.860547   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:09.860676   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:09.860895   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.861073   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:09.861275   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:09.861431   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:09.861642   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:09.861666   67962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:54:10.154896   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:54:10.154920   67962 main.go:141] libmachine: Checking connection to Docker...
	I0429 00:54:10.154930   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetURL
	I0429 00:54:10.156306   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | Using libvirt version 6000000
	I0429 00:54:10.158565   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.158936   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.158961   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.159064   67962 main.go:141] libmachine: Docker is up and running!
	I0429 00:54:10.159076   67962 main.go:141] libmachine: Reticulating splines...
	I0429 00:54:10.159083   67962 client.go:171] duration metric: took 24.271675054s to LocalClient.Create
	I0429 00:54:10.159108   67962 start.go:167] duration metric: took 24.271730001s to libmachine.API.Create "old-k8s-version-681355"
	I0429 00:54:10.159121   67962 start.go:293] postStartSetup for "old-k8s-version-681355" (driver="kvm2")
	I0429 00:54:10.159138   67962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:54:10.159166   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:10.159428   67962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:54:10.159453   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:10.161392   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.161641   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.161671   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.161807   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:10.162002   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:10.162165   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:10.162295   67962 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa Username:docker}
	I0429 00:54:10.250168   67962 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:54:10.255368   67962 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:54:10.255399   67962 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:54:10.255470   67962 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:54:10.255581   67962 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:54:10.255677   67962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:54:10.268637   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:54:10.299872   67962 start.go:296] duration metric: took 140.734238ms for postStartSetup
	I0429 00:54:10.299924   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetConfigRaw
	I0429 00:54:10.300536   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetIP
	I0429 00:54:10.303339   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.303811   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.303842   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.304106   67962 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/config.json ...
	I0429 00:54:10.304335   67962 start.go:128] duration metric: took 24.43991484s to createHost
	I0429 00:54:10.304366   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:10.306575   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.306920   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.306947   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.307071   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:10.307270   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:10.307473   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:10.307655   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:10.307829   67962 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:10.308040   67962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0429 00:54:10.308060   67962 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:54:10.423895   67962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714352050.401410967
	
	I0429 00:54:10.423922   67962 fix.go:216] guest clock: 1714352050.401410967
	I0429 00:54:10.423932   67962 fix.go:229] Guest: 2024-04-29 00:54:10.401410967 +0000 UTC Remote: 2024-04-29 00:54:10.304350888 +0000 UTC m=+45.677374071 (delta=97.060079ms)
	I0429 00:54:10.423958   67962 fix.go:200] guest clock delta is within tolerance: 97.060079ms
	I0429 00:54:10.423966   67962 start.go:83] releasing machines lock for "old-k8s-version-681355", held for 24.559717423s
	I0429 00:54:10.424014   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:10.424331   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetIP
	I0429 00:54:10.427348   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.427807   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.427835   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.428010   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:10.428504   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:10.428713   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .DriverName
	I0429 00:54:10.428791   67962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:54:10.428833   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:10.428939   67962 ssh_runner.go:195] Run: cat /version.json
	I0429 00:54:10.428964   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHHostname
	I0429 00:54:10.431462   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.431786   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.431832   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.431850   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.432091   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:10.432128   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:10.432170   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:10.432415   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHPort
	I0429 00:54:10.432418   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:10.432603   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:10.432651   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHKeyPath
	I0429 00:54:10.432777   67962 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa Username:docker}
	I0429 00:54:10.432883   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetSSHUsername
	I0429 00:54:10.433027   67962 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/old-k8s-version-681355/id_rsa Username:docker}
	I0429 00:54:10.516763   67962 ssh_runner.go:195] Run: systemctl --version
	I0429 00:54:10.541451   67962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:54:10.715897   67962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:54:10.724614   67962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:54:10.724694   67962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:54:10.745023   67962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 00:54:10.745051   67962 start.go:494] detecting cgroup driver to use...
	I0429 00:54:10.745120   67962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:54:10.765987   67962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:54:10.784452   67962 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:54:10.784518   67962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:54:10.799638   67962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:54:10.816337   67962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:54:10.970354   67962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:54:11.139321   67962 docker.go:233] disabling docker service ...
	I0429 00:54:11.139405   67962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:54:11.156941   67962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:54:11.172968   67962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:54:11.330125   67962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:54:11.489508   67962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:54:11.507436   67962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:54:11.532235   67962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 00:54:11.532306   67962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:11.546838   67962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:54:11.546921   67962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:11.561951   67962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:11.582703   67962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:11.597068   67962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:54:11.611908   67962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:54:11.624663   67962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 00:54:11.624725   67962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 00:54:11.641979   67962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:54:11.652911   67962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:54:11.795338   67962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:54:11.996140   67962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:54:11.996206   67962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:54:12.002413   67962 start.go:562] Will wait 60s for crictl version
	I0429 00:54:12.002476   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:12.007861   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:54:12.053736   67962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:54:12.053821   67962 ssh_runner.go:195] Run: crio --version
	I0429 00:54:12.091904   67962 ssh_runner.go:195] Run: crio --version
	I0429 00:54:12.133414   67962 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 00:54:10.426765   68399 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 00:54:10.427010   68399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:54:10.427060   68399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:54:10.447215   68399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0429 00:54:10.447613   68399 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:54:10.448189   68399 main.go:141] libmachine: Using API Version  1
	I0429 00:54:10.448209   68399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:54:10.448527   68399 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:54:10.448715   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetMachineName
	I0429 00:54:10.448857   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:10.448993   68399 start.go:159] libmachine.API.Create for "no-preload-440870" (driver="kvm2")
	I0429 00:54:10.449028   68399 client.go:168] LocalClient.Create starting
	I0429 00:54:10.449064   68399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem
	I0429 00:54:10.449103   68399 main.go:141] libmachine: Decoding PEM data...
	I0429 00:54:10.449132   68399 main.go:141] libmachine: Parsing certificate...
	I0429 00:54:10.449204   68399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem
	I0429 00:54:10.449236   68399 main.go:141] libmachine: Decoding PEM data...
	I0429 00:54:10.449253   68399 main.go:141] libmachine: Parsing certificate...
	I0429 00:54:10.449279   68399 main.go:141] libmachine: Running pre-create checks...
	I0429 00:54:10.449291   68399 main.go:141] libmachine: (no-preload-440870) Calling .PreCreateCheck
	I0429 00:54:10.449612   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetConfigRaw
	I0429 00:54:10.450039   68399 main.go:141] libmachine: Creating machine...
	I0429 00:54:10.450057   68399 main.go:141] libmachine: (no-preload-440870) Calling .Create
	I0429 00:54:10.450187   68399 main.go:141] libmachine: (no-preload-440870) Creating KVM machine...
	I0429 00:54:10.451393   68399 main.go:141] libmachine: (no-preload-440870) DBG | found existing default KVM network
	I0429 00:54:10.452842   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.452673   68705 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:54:e5} reservation:<nil>}
	I0429 00:54:10.453795   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.453710   68705 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:91:3b} reservation:<nil>}
	I0429 00:54:10.454621   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.454522   68705 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c0:0c:fa} reservation:<nil>}
	I0429 00:54:10.455623   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.455542   68705 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3890}
	I0429 00:54:10.455642   68399 main.go:141] libmachine: (no-preload-440870) DBG | created network xml: 
	I0429 00:54:10.455667   68399 main.go:141] libmachine: (no-preload-440870) DBG | <network>
	I0429 00:54:10.455685   68399 main.go:141] libmachine: (no-preload-440870) DBG |   <name>mk-no-preload-440870</name>
	I0429 00:54:10.455695   68399 main.go:141] libmachine: (no-preload-440870) DBG |   <dns enable='no'/>
	I0429 00:54:10.455707   68399 main.go:141] libmachine: (no-preload-440870) DBG |   
	I0429 00:54:10.455719   68399 main.go:141] libmachine: (no-preload-440870) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 00:54:10.455731   68399 main.go:141] libmachine: (no-preload-440870) DBG |     <dhcp>
	I0429 00:54:10.455742   68399 main.go:141] libmachine: (no-preload-440870) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 00:54:10.455750   68399 main.go:141] libmachine: (no-preload-440870) DBG |     </dhcp>
	I0429 00:54:10.455755   68399 main.go:141] libmachine: (no-preload-440870) DBG |   </ip>
	I0429 00:54:10.455762   68399 main.go:141] libmachine: (no-preload-440870) DBG |   
	I0429 00:54:10.455768   68399 main.go:141] libmachine: (no-preload-440870) DBG | </network>
	I0429 00:54:10.455772   68399 main.go:141] libmachine: (no-preload-440870) DBG | 
	I0429 00:54:10.461385   68399 main.go:141] libmachine: (no-preload-440870) DBG | trying to create private KVM network mk-no-preload-440870 192.168.72.0/24...
	I0429 00:54:10.536056   68399 main.go:141] libmachine: (no-preload-440870) DBG | private KVM network mk-no-preload-440870 192.168.72.0/24 created
	I0429 00:54:10.536086   68399 main.go:141] libmachine: (no-preload-440870) Setting up store path in /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870 ...
	I0429 00:54:10.536100   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.536013   68705 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:54:10.536116   68399 main.go:141] libmachine: (no-preload-440870) Building disk image from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 00:54:10.536159   68399 main.go:141] libmachine: (no-preload-440870) Downloading /home/jenkins/minikube-integration/17977-13393/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 00:54:10.778076   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.777923   68705 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa...
	I0429 00:54:10.908953   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.908830   68705 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/no-preload-440870.rawdisk...
	I0429 00:54:10.908984   68399 main.go:141] libmachine: (no-preload-440870) DBG | Writing magic tar header
	I0429 00:54:10.909007   68399 main.go:141] libmachine: (no-preload-440870) DBG | Writing SSH key tar header
	I0429 00:54:10.909078   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:10.909004   68705 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870 ...
	I0429 00:54:10.909189   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870 (perms=drwx------)
	I0429 00:54:10.909209   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870
	I0429 00:54:10.909220   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube/machines (perms=drwxr-xr-x)
	I0429 00:54:10.909237   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393/.minikube (perms=drwxr-xr-x)
	I0429 00:54:10.909250   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins/minikube-integration/17977-13393 (perms=drwxrwxr-x)
	I0429 00:54:10.909312   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 00:54:10.909367   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube/machines
	I0429 00:54:10.909384   68399 main.go:141] libmachine: (no-preload-440870) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 00:54:10.909402   68399 main.go:141] libmachine: (no-preload-440870) Creating domain...
	I0429 00:54:10.909431   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:54:10.909445   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17977-13393
	I0429 00:54:10.909458   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 00:54:10.909470   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home/jenkins
	I0429 00:54:10.909479   68399 main.go:141] libmachine: (no-preload-440870) DBG | Checking permissions on dir: /home
	I0429 00:54:10.909490   68399 main.go:141] libmachine: (no-preload-440870) DBG | Skipping /home - not owner
	I0429 00:54:10.910748   68399 main.go:141] libmachine: (no-preload-440870) define libvirt domain using xml: 
	I0429 00:54:10.910775   68399 main.go:141] libmachine: (no-preload-440870) <domain type='kvm'>
	I0429 00:54:10.910786   68399 main.go:141] libmachine: (no-preload-440870)   <name>no-preload-440870</name>
	I0429 00:54:10.910794   68399 main.go:141] libmachine: (no-preload-440870)   <memory unit='MiB'>2200</memory>
	I0429 00:54:10.910804   68399 main.go:141] libmachine: (no-preload-440870)   <vcpu>2</vcpu>
	I0429 00:54:10.910811   68399 main.go:141] libmachine: (no-preload-440870)   <features>
	I0429 00:54:10.910840   68399 main.go:141] libmachine: (no-preload-440870)     <acpi/>
	I0429 00:54:10.910854   68399 main.go:141] libmachine: (no-preload-440870)     <apic/>
	I0429 00:54:10.910863   68399 main.go:141] libmachine: (no-preload-440870)     <pae/>
	I0429 00:54:10.910869   68399 main.go:141] libmachine: (no-preload-440870)     
	I0429 00:54:10.910878   68399 main.go:141] libmachine: (no-preload-440870)   </features>
	I0429 00:54:10.910886   68399 main.go:141] libmachine: (no-preload-440870)   <cpu mode='host-passthrough'>
	I0429 00:54:10.910893   68399 main.go:141] libmachine: (no-preload-440870)   
	I0429 00:54:10.910899   68399 main.go:141] libmachine: (no-preload-440870)   </cpu>
	I0429 00:54:10.910907   68399 main.go:141] libmachine: (no-preload-440870)   <os>
	I0429 00:54:10.910919   68399 main.go:141] libmachine: (no-preload-440870)     <type>hvm</type>
	I0429 00:54:10.910927   68399 main.go:141] libmachine: (no-preload-440870)     <boot dev='cdrom'/>
	I0429 00:54:10.910934   68399 main.go:141] libmachine: (no-preload-440870)     <boot dev='hd'/>
	I0429 00:54:10.910942   68399 main.go:141] libmachine: (no-preload-440870)     <bootmenu enable='no'/>
	I0429 00:54:10.910948   68399 main.go:141] libmachine: (no-preload-440870)   </os>
	I0429 00:54:10.910956   68399 main.go:141] libmachine: (no-preload-440870)   <devices>
	I0429 00:54:10.910964   68399 main.go:141] libmachine: (no-preload-440870)     <disk type='file' device='cdrom'>
	I0429 00:54:10.910977   68399 main.go:141] libmachine: (no-preload-440870)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/boot2docker.iso'/>
	I0429 00:54:10.910988   68399 main.go:141] libmachine: (no-preload-440870)       <target dev='hdc' bus='scsi'/>
	I0429 00:54:10.910996   68399 main.go:141] libmachine: (no-preload-440870)       <readonly/>
	I0429 00:54:10.911002   68399 main.go:141] libmachine: (no-preload-440870)     </disk>
	I0429 00:54:10.911011   68399 main.go:141] libmachine: (no-preload-440870)     <disk type='file' device='disk'>
	I0429 00:54:10.911021   68399 main.go:141] libmachine: (no-preload-440870)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 00:54:10.911033   68399 main.go:141] libmachine: (no-preload-440870)       <source file='/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/no-preload-440870.rawdisk'/>
	I0429 00:54:10.911040   68399 main.go:141] libmachine: (no-preload-440870)       <target dev='hda' bus='virtio'/>
	I0429 00:54:10.911048   68399 main.go:141] libmachine: (no-preload-440870)     </disk>
	I0429 00:54:10.911055   68399 main.go:141] libmachine: (no-preload-440870)     <interface type='network'>
	I0429 00:54:10.911064   68399 main.go:141] libmachine: (no-preload-440870)       <source network='mk-no-preload-440870'/>
	I0429 00:54:10.911077   68399 main.go:141] libmachine: (no-preload-440870)       <model type='virtio'/>
	I0429 00:54:10.911085   68399 main.go:141] libmachine: (no-preload-440870)     </interface>
	I0429 00:54:10.911092   68399 main.go:141] libmachine: (no-preload-440870)     <interface type='network'>
	I0429 00:54:10.911101   68399 main.go:141] libmachine: (no-preload-440870)       <source network='default'/>
	I0429 00:54:10.911108   68399 main.go:141] libmachine: (no-preload-440870)       <model type='virtio'/>
	I0429 00:54:10.911115   68399 main.go:141] libmachine: (no-preload-440870)     </interface>
	I0429 00:54:10.911128   68399 main.go:141] libmachine: (no-preload-440870)     <serial type='pty'>
	I0429 00:54:10.911138   68399 main.go:141] libmachine: (no-preload-440870)       <target port='0'/>
	I0429 00:54:10.911145   68399 main.go:141] libmachine: (no-preload-440870)     </serial>
	I0429 00:54:10.911153   68399 main.go:141] libmachine: (no-preload-440870)     <console type='pty'>
	I0429 00:54:10.911159   68399 main.go:141] libmachine: (no-preload-440870)       <target type='serial' port='0'/>
	I0429 00:54:10.911167   68399 main.go:141] libmachine: (no-preload-440870)     </console>
	I0429 00:54:10.911173   68399 main.go:141] libmachine: (no-preload-440870)     <rng model='virtio'>
	I0429 00:54:10.911182   68399 main.go:141] libmachine: (no-preload-440870)       <backend model='random'>/dev/random</backend>
	I0429 00:54:10.911198   68399 main.go:141] libmachine: (no-preload-440870)     </rng>
	I0429 00:54:10.911205   68399 main.go:141] libmachine: (no-preload-440870)     
	I0429 00:54:10.911211   68399 main.go:141] libmachine: (no-preload-440870)     
	I0429 00:54:10.911223   68399 main.go:141] libmachine: (no-preload-440870)   </devices>
	I0429 00:54:10.911230   68399 main.go:141] libmachine: (no-preload-440870) </domain>
	I0429 00:54:10.911239   68399 main.go:141] libmachine: (no-preload-440870) 
	I0429 00:54:10.916485   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:e2:fb:b3 in network default
	I0429 00:54:10.917278   68399 main.go:141] libmachine: (no-preload-440870) Ensuring networks are active...
	I0429 00:54:10.917303   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:10.918220   68399 main.go:141] libmachine: (no-preload-440870) Ensuring network default is active
	I0429 00:54:10.918604   68399 main.go:141] libmachine: (no-preload-440870) Ensuring network mk-no-preload-440870 is active
	I0429 00:54:10.919190   68399 main.go:141] libmachine: (no-preload-440870) Getting domain xml...
	I0429 00:54:10.920066   68399 main.go:141] libmachine: (no-preload-440870) Creating domain...
	I0429 00:54:12.245430   68399 main.go:141] libmachine: (no-preload-440870) Waiting to get IP...
	I0429 00:54:12.246511   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:12.247077   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:12.247147   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:12.247072   68705 retry.go:31] will retry after 269.668367ms: waiting for machine to come up
	I0429 00:54:12.518781   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:12.519364   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:12.519391   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:12.519332   68705 retry.go:31] will retry after 369.857921ms: waiting for machine to come up
	I0429 00:54:12.891096   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:12.891781   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:12.891812   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:12.891734   68705 retry.go:31] will retry after 471.054468ms: waiting for machine to come up
	I0429 00:54:12.134761   67962 main.go:141] libmachine: (old-k8s-version-681355) Calling .GetIP
	I0429 00:54:12.137895   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:12.138283   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4c:88", ip: ""} in network mk-old-k8s-version-681355: {Iface:virbr3 ExpiryTime:2024-04-29 01:54:03 +0000 UTC Type:0 Mac:52:54:00:ad:4c:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:old-k8s-version-681355 Clientid:01:52:54:00:ad:4c:88}
	I0429 00:54:12.138307   67962 main.go:141] libmachine: (old-k8s-version-681355) DBG | domain old-k8s-version-681355 has defined IP address 192.168.39.165 and MAC address 52:54:00:ad:4c:88 in network mk-old-k8s-version-681355
	I0429 00:54:12.138555   67962 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:54:12.146598   67962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 00:54:12.167884   67962 kubeadm.go:877] updating cluster {Name:old-k8s-version-681355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-681355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:54:12.168023   67962 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 00:54:12.168084   67962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:54:12.220602   67962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 00:54:12.220660   67962 ssh_runner.go:195] Run: which lz4
	I0429 00:54:12.225681   67962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 00:54:12.230802   67962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 00:54:12.230834   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 00:54:14.476899   67962 crio.go:462] duration metric: took 2.251248173s to copy over tarball
	I0429 00:54:14.477001   67962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 00:54:13.364268   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:13.364871   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:13.364904   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:13.364774   68705 retry.go:31] will retry after 371.217312ms: waiting for machine to come up
	I0429 00:54:13.737211   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:13.737826   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:13.737853   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:13.737735   68705 retry.go:31] will retry after 713.096814ms: waiting for machine to come up
	I0429 00:54:14.451959   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:14.452487   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:14.452542   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:14.452452   68705 retry.go:31] will retry after 573.925157ms: waiting for machine to come up
	I0429 00:54:15.028479   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:15.029053   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:15.029081   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:15.029002   68705 retry.go:31] will retry after 836.007513ms: waiting for machine to come up
	I0429 00:54:15.866588   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:15.867452   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:15.867480   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:15.867405   68705 retry.go:31] will retry after 980.595743ms: waiting for machine to come up
	I0429 00:54:16.849406   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:16.850018   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:16.850059   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:16.849962   68705 retry.go:31] will retry after 1.618431193s: waiting for machine to come up
	I0429 00:54:17.826523   67962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.349485493s)
	I0429 00:54:17.826558   67962 crio.go:469] duration metric: took 3.349620822s to extract the tarball
	I0429 00:54:17.826569   67962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 00:54:17.890834   67962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:54:17.952098   67962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 00:54:17.952127   67962 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 00:54:17.952196   67962 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:17.952216   67962 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:54:17.952238   67962 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:54:17.952255   67962 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:54:17.952292   67962 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:54:17.952472   67962 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 00:54:17.952485   67962 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:54:17.952473   67962 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 00:54:17.953776   67962 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 00:54:17.953960   67962 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:54:17.954086   67962 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:17.954238   67962 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:54:17.954317   67962 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:54:17.954388   67962 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:54:17.954949   67962 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 00:54:17.955057   67962 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:54:18.110831   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 00:54:18.163104   67962 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 00:54:18.163152   67962 cri.go:232] Removing image: registry.k8s.io/pause:3.2
	I0429 00:54:18.163208   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.168558   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 00:54:18.190299   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:54:18.212898   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 00:54:18.251215   67962 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 00:54:18.251269   67962 cri.go:232] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:54:18.251330   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.256347   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 00:54:18.295929   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:54:18.298520   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:54:18.302163   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 00:54:18.303927   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:54:18.313876   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 00:54:18.324255   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 00:54:18.470467   67962 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 00:54:18.470512   67962 cri.go:232] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:54:18.470529   67962 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 00:54:18.470563   67962 cri.go:232] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:54:18.470569   67962 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 00:54:18.470579   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.470589   67962 cri.go:232] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 00:54:18.470607   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.470616   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.470623   67962 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 00:54:18.470650   67962 cri.go:232] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:54:18.470653   67962 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 00:54:18.470675   67962 cri.go:232] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 00:54:18.470703   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.470708   67962 ssh_runner.go:195] Run: which crictl
	I0429 00:54:18.483456   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 00:54:18.483524   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 00:54:18.484176   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 00:54:18.484295   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 00:54:18.485151   67962 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 00:54:18.591078   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 00:54:18.609705   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 00:54:18.617140   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 00:54:18.617169   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 00:54:18.617262   67962 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 00:54:18.894517   67962 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:19.044371   67962 cache_images.go:92] duration metric: took 1.092225395s to LoadCachedImages
	W0429 00:54:19.044462   67962 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0429 00:54:19.044479   67962 kubeadm.go:928] updating node { 192.168.39.165 8443 v1.20.0 crio true true} ...
	I0429 00:54:19.044626   67962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-681355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-681355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:54:19.044720   67962 ssh_runner.go:195] Run: crio config
	I0429 00:54:19.104042   67962 cni.go:84] Creating CNI manager for ""
	I0429 00:54:19.104066   67962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:54:19.104077   67962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:54:19.104094   67962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-681355 NodeName:old-k8s-version-681355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 00:54:19.104277   67962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-681355"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:54:19.104358   67962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 00:54:19.114885   67962 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:54:19.114953   67962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:54:19.125817   67962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 00:54:19.148814   67962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:54:19.171148   67962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 00:54:19.194489   67962 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0429 00:54:19.199900   67962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 00:54:19.213593   67962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:54:19.382782   67962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:54:19.405245   67962 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355 for IP: 192.168.39.165
	I0429 00:54:19.405271   67962 certs.go:194] generating shared ca certs ...
	I0429 00:54:19.405290   67962 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.405498   67962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:54:19.405555   67962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:54:19.405568   67962 certs.go:256] generating profile certs ...
	I0429 00:54:19.405677   67962 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.key
	I0429 00:54:19.405696   67962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.crt with IP's: []
	I0429 00:54:19.661027   67962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.crt ...
	I0429 00:54:19.661059   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.crt: {Name:mk08d683cfb02b14701db7892a536dc2a29e1e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.661276   67962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.key ...
	I0429 00:54:19.661295   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/client.key: {Name:mk1572a21e54d3fb301cb46505fdf32be1487d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.661408   67962 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key.2c679f7c
	I0429 00:54:19.661428   67962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt.2c679f7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165]
	I0429 00:54:19.805413   67962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt.2c679f7c ...
	I0429 00:54:19.805441   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt.2c679f7c: {Name:mk1118eeab754185fdd69fd450f754c22c973358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.869193   67962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key.2c679f7c ...
	I0429 00:54:19.869242   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key.2c679f7c: {Name:mke695ee4ee09160c24eb61d536b8d45e1128fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.869452   67962 certs.go:381] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt.2c679f7c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt
	I0429 00:54:19.869575   67962 certs.go:385] copying /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key.2c679f7c -> /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key
	I0429 00:54:19.869653   67962 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.key
	I0429 00:54:19.869674   67962 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.crt with IP's: []
	I0429 00:54:19.921992   67962 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.crt ...
	I0429 00:54:19.922026   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.crt: {Name:mk80d8ec8eff5b03d6be55df915b9474b3dd3667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.950757   67962 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.key ...
	I0429 00:54:19.950795   67962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.key: {Name:mk6fd93bc5c19c7feec6a6f4804a285609ec597c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:19.951075   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:54:19.951130   67962 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:54:19.951146   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:54:19.951173   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:54:19.951211   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:54:19.951240   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:54:19.951302   67962 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:54:19.952171   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:54:19.984577   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:54:20.013746   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:54:20.043169   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:54:20.073167   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 00:54:20.101776   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 00:54:20.131400   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:54:20.161855   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/old-k8s-version-681355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 00:54:20.192202   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:54:20.223051   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:54:20.266311   67962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:54:20.297199   67962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:54:20.316551   67962 ssh_runner.go:195] Run: openssl version
	I0429 00:54:20.323978   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:54:20.339800   67962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:20.345139   67962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:20.345189   67962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:20.352308   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:54:20.370834   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:54:20.382809   67962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:54:20.387861   67962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:54:20.387929   67962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:54:20.395598   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:54:20.410000   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:54:20.423341   67962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:54:20.428890   67962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:54:20.428937   67962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:54:20.435968   67962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:54:20.449500   67962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:54:20.454180   67962 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 00:54:20.454237   67962 kubeadm.go:391] StartCluster: {Name:old-k8s-version-681355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-681355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:54:20.454309   67962 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:54:20.454366   67962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:54:20.501917   67962 cri.go:91] found id: ""
	I0429 00:54:20.501998   67962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 00:54:20.513281   67962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 00:54:20.524722   67962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 00:54:20.537040   67962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 00:54:20.537108   67962 kubeadm.go:156] found existing configuration files:
	
	I0429 00:54:20.537162   67962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 00:54:20.547528   67962 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 00:54:20.547612   67962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 00:54:20.558093   67962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 00:54:20.568530   67962 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 00:54:20.568599   67962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 00:54:20.579879   67962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 00:54:20.591017   67962 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 00:54:20.591095   67962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 00:54:20.602119   67962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 00:54:20.612028   67962 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 00:54:20.612082   67962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 00:54:20.623071   67962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 00:54:20.760426   67962 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 00:54:20.760541   67962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 00:54:20.937563   67962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 00:54:20.937698   67962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 00:54:20.937814   67962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 00:54:21.158564   67962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 00:54:18.469849   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:18.470358   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:18.470382   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:18.470319   68705 retry.go:31] will retry after 1.927003119s: waiting for machine to come up
	I0429 00:54:20.399245   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:20.399864   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:20.399896   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:20.399812   68705 retry.go:31] will retry after 2.294277875s: waiting for machine to come up
	I0429 00:54:22.696272   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:22.696782   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:22.696811   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:22.696741   68705 retry.go:31] will retry after 3.355324999s: waiting for machine to come up
	I0429 00:54:21.161285   67962 out.go:204]   - Generating certificates and keys ...
	I0429 00:54:21.161408   67962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 00:54:21.161517   67962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 00:54:21.356911   67962 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 00:54:21.718478   67962 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 00:54:21.833911   67962 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 00:54:21.968918   67962 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 00:54:22.102654   67962 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 00:54:22.102932   67962 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-681355] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0429 00:54:22.157461   67962 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 00:54:22.157790   67962 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-681355] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0429 00:54:22.530403   67962 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 00:54:22.611165   67962 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 00:54:22.913219   67962 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 00:54:22.913579   67962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 00:54:23.079843   67962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 00:54:23.248190   67962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 00:54:23.597473   67962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 00:54:23.777230   67962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 00:54:23.798451   67962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 00:54:23.799825   67962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 00:54:23.799985   67962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 00:54:23.966946   67962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 00:54:23.969828   67962 out.go:204]   - Booting up control plane ...
	I0429 00:54:23.969988   67962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 00:54:23.981533   67962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 00:54:23.982862   67962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 00:54:23.985986   67962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 00:54:23.996981   67962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 00:54:26.054208   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:26.054719   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:26.054742   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:26.054653   68705 retry.go:31] will retry after 3.44033595s: waiting for machine to come up
	I0429 00:54:29.499007   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:29.499521   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find current IP address of domain no-preload-440870 in network mk-no-preload-440870
	I0429 00:54:29.499563   68399 main.go:141] libmachine: (no-preload-440870) DBG | I0429 00:54:29.499491   68705 retry.go:31] will retry after 4.505308336s: waiting for machine to come up
	I0429 00:54:35.707898   68628 start.go:364] duration metric: took 29.720687724s to acquireMachinesLock for "kubernetes-upgrade-219055"
	I0429 00:54:35.707952   68628 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:54:35.707961   68628 fix.go:54] fixHost starting: 
	I0429 00:54:35.708385   68628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:54:35.708434   68628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:54:35.727933   68628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0429 00:54:35.728319   68628 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:54:35.728885   68628 main.go:141] libmachine: Using API Version  1
	I0429 00:54:35.728904   68628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:54:35.729211   68628 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:54:35.729376   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:35.729509   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetState
	I0429 00:54:35.730989   68628 fix.go:112] recreateIfNeeded on kubernetes-upgrade-219055: state=Running err=<nil>
	W0429 00:54:35.731010   68628 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:54:35.733378   68628 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-219055" VM ...
	I0429 00:54:35.734833   68628 machine.go:94] provisionDockerMachine start ...
	I0429 00:54:35.734857   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:35.735065   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:35.737396   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:35.737861   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:35.737895   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:35.737967   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:35.738157   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:35.738312   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:35.738433   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:35.738605   68628 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:35.738821   68628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:54:35.738834   68628 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:54:35.851501   68628 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219055
	
	I0429 00:54:35.851530   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:54:35.851804   68628 buildroot.go:166] provisioning hostname "kubernetes-upgrade-219055"
	I0429 00:54:35.851839   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:54:35.852028   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:35.855242   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:35.855662   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:35.855682   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:35.855900   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:35.856116   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:35.856300   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:35.856443   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:35.856596   68628 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:35.856765   68628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:54:35.856782   68628 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-219055 && echo "kubernetes-upgrade-219055" | sudo tee /etc/hostname
	I0429 00:54:34.008674   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.009134   68399 main.go:141] libmachine: (no-preload-440870) Found IP for machine: 192.168.72.33
	I0429 00:54:34.009171   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has current primary IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.009179   68399 main.go:141] libmachine: (no-preload-440870) Reserving static IP address...
	I0429 00:54:34.009542   68399 main.go:141] libmachine: (no-preload-440870) DBG | unable to find host DHCP lease matching {name: "no-preload-440870", mac: "52:54:00:be:d8:5f", ip: "192.168.72.33"} in network mk-no-preload-440870
	I0429 00:54:34.088010   68399 main.go:141] libmachine: (no-preload-440870) DBG | Getting to WaitForSSH function...
	I0429 00:54:34.088042   68399 main.go:141] libmachine: (no-preload-440870) Reserved static IP address: 192.168.72.33
	I0429 00:54:34.088054   68399 main.go:141] libmachine: (no-preload-440870) Waiting for SSH to be available...
	I0429 00:54:34.091073   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.091502   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.091527   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.091720   68399 main.go:141] libmachine: (no-preload-440870) DBG | Using SSH client type: external
	I0429 00:54:34.091755   68399 main.go:141] libmachine: (no-preload-440870) DBG | Using SSH private key: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa (-rw-------)
	I0429 00:54:34.091800   68399 main.go:141] libmachine: (no-preload-440870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 00:54:34.091820   68399 main.go:141] libmachine: (no-preload-440870) DBG | About to run SSH command:
	I0429 00:54:34.091850   68399 main.go:141] libmachine: (no-preload-440870) DBG | exit 0
	I0429 00:54:34.222718   68399 main.go:141] libmachine: (no-preload-440870) DBG | SSH cmd err, output: <nil>: 
	I0429 00:54:34.222959   68399 main.go:141] libmachine: (no-preload-440870) KVM machine creation complete!
	I0429 00:54:34.223317   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetConfigRaw
	I0429 00:54:34.223970   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:34.224196   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:34.224366   68399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 00:54:34.224382   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetState
	I0429 00:54:34.225806   68399 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 00:54:34.225819   68399 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 00:54:34.225825   68399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 00:54:34.225837   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.228181   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.228579   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.228623   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.228758   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:34.228923   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.229067   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.229236   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:34.229398   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:34.229602   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:34.229616   68399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 00:54:34.341602   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:54:34.341629   68399 main.go:141] libmachine: Detecting the provisioner...
	I0429 00:54:34.341641   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.344384   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.344750   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.344785   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.344915   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:34.345126   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.345295   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.345468   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:34.345606   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:34.345822   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:34.345836   68399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 00:54:34.459299   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 00:54:34.459413   68399 main.go:141] libmachine: found compatible host: buildroot
	I0429 00:54:34.459429   68399 main.go:141] libmachine: Provisioning with buildroot...
	I0429 00:54:34.459444   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetMachineName
	I0429 00:54:34.459670   68399 buildroot.go:166] provisioning hostname "no-preload-440870"
	I0429 00:54:34.459691   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetMachineName
	I0429 00:54:34.459850   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.462495   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.462813   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.462849   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.462985   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:34.463241   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.463455   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.463607   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:34.463808   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:34.464015   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:34.464032   68399 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-440870 && echo "no-preload-440870" | sudo tee /etc/hostname
	I0429 00:54:34.591827   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-440870
	
	I0429 00:54:34.591855   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.594487   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.594845   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.594878   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.595035   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:34.595254   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.595421   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.595586   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:34.595734   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:34.595915   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:34.595932   68399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-440870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-440870/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-440870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:54:34.717243   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:54:34.717284   68399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:54:34.717348   68399 buildroot.go:174] setting up certificates
	I0429 00:54:34.717363   68399 provision.go:84] configureAuth start
	I0429 00:54:34.717383   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetMachineName
	I0429 00:54:34.717643   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetIP
	I0429 00:54:34.720344   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.720680   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.720708   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.720799   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.722902   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.723233   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.723264   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.723411   68399 provision.go:143] copyHostCerts
	I0429 00:54:34.723477   68399 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:54:34.723487   68399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:54:34.723536   68399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:54:34.723632   68399 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:54:34.723641   68399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:54:34.723660   68399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:54:34.723738   68399 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:54:34.723747   68399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:54:34.723763   68399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:54:34.723806   68399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.no-preload-440870 san=[127.0.0.1 192.168.72.33 localhost minikube no-preload-440870]
	I0429 00:54:34.974454   68399 provision.go:177] copyRemoteCerts
	I0429 00:54:34.974512   68399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:54:34.974535   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:34.977004   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.977368   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:34.977401   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:34.977601   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:34.977835   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:34.978068   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:34.978259   68399 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa Username:docker}
	I0429 00:54:35.069261   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 00:54:35.098063   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 00:54:35.126509   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:54:35.153529   68399 provision.go:87] duration metric: took 436.148195ms to configureAuth
	I0429 00:54:35.153556   68399 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:54:35.153711   68399 config.go:182] Loaded profile config "no-preload-440870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:54:35.153775   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:35.156430   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.156794   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.156826   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.156952   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:35.157156   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.157330   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.157463   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:35.157660   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:35.157854   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:35.157872   68399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:54:35.450702   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:54:35.450743   68399 main.go:141] libmachine: Checking connection to Docker...
	I0429 00:54:35.450754   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetURL
	I0429 00:54:35.452071   68399 main.go:141] libmachine: (no-preload-440870) DBG | Using libvirt version 6000000
	I0429 00:54:35.454353   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.454694   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.454729   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.454914   68399 main.go:141] libmachine: Docker is up and running!
	I0429 00:54:35.454933   68399 main.go:141] libmachine: Reticulating splines...
	I0429 00:54:35.454940   68399 client.go:171] duration metric: took 25.005901938s to LocalClient.Create
	I0429 00:54:35.454960   68399 start.go:167] duration metric: took 25.005968642s to libmachine.API.Create "no-preload-440870"
	I0429 00:54:35.454969   68399 start.go:293] postStartSetup for "no-preload-440870" (driver="kvm2")
	I0429 00:54:35.454979   68399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:54:35.454991   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:35.455204   68399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:54:35.455227   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:35.457204   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.457507   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.457541   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.457633   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:35.457799   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.457931   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:35.458068   68399 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa Username:docker}
	I0429 00:54:35.545030   68399 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:54:35.549423   68399 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:54:35.549445   68399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:54:35.549520   68399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:54:35.549612   68399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:54:35.549728   68399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:54:35.560480   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:54:35.586865   68399 start.go:296] duration metric: took 131.886163ms for postStartSetup
	I0429 00:54:35.586918   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetConfigRaw
	I0429 00:54:35.587537   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetIP
	I0429 00:54:35.590380   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.590755   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.590781   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.591063   68399 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/no-preload-440870/config.json ...
	I0429 00:54:35.591246   68399 start.go:128] duration metric: took 25.166940919s to createHost
	I0429 00:54:35.591270   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:35.593307   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.593658   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.593689   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.593864   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:35.594037   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.594162   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.594281   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:35.594422   68399 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:35.594588   68399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0429 00:54:35.594600   68399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 00:54:35.707718   68399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714352075.685888689
	
	I0429 00:54:35.707739   68399 fix.go:216] guest clock: 1714352075.685888689
	I0429 00:54:35.707748   68399 fix.go:229] Guest: 2024-04-29 00:54:35.685888689 +0000 UTC Remote: 2024-04-29 00:54:35.591257504 +0000 UTC m=+47.423540495 (delta=94.631185ms)
	I0429 00:54:35.707802   68399 fix.go:200] guest clock delta is within tolerance: 94.631185ms
	I0429 00:54:35.707809   68399 start.go:83] releasing machines lock for "no-preload-440870", held for 25.283689849s
	I0429 00:54:35.707839   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:35.708142   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetIP
	I0429 00:54:35.710898   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.711240   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.711272   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.711446   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:35.711918   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:35.712114   68399 main.go:141] libmachine: (no-preload-440870) Calling .DriverName
	I0429 00:54:35.712205   68399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:54:35.712246   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:35.712319   68399 ssh_runner.go:195] Run: cat /version.json
	I0429 00:54:35.712352   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHHostname
	I0429 00:54:35.714964   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.715283   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.715313   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.715344   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.715448   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:35.715605   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.715658   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:35.715689   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:35.715870   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:35.715870   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHPort
	I0429 00:54:35.716038   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHKeyPath
	I0429 00:54:35.716040   68399 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa Username:docker}
	I0429 00:54:35.716162   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetSSHUsername
	I0429 00:54:35.716294   68399 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/no-preload-440870/id_rsa Username:docker}
	I0429 00:54:35.827597   68399 ssh_runner.go:195] Run: systemctl --version
	I0429 00:54:35.834084   68399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:54:35.999010   68399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:54:36.008582   68399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:54:36.008644   68399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:54:36.028325   68399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 00:54:36.028352   68399 start.go:494] detecting cgroup driver to use...
	I0429 00:54:36.028413   68399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:54:36.045556   68399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:54:36.061453   68399 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:54:36.061508   68399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:54:36.076131   68399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:54:36.090915   68399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:54:36.212583   68399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:54:36.368104   68399 docker.go:233] disabling docker service ...
	I0429 00:54:36.368174   68399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:54:36.384248   68399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:54:36.398902   68399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:54:36.558649   68399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:54:36.684062   68399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:54:36.702001   68399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:54:36.724601   68399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:54:36.724671   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.739473   68399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:54:36.739546   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.753005   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.766591   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.781497   68399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:54:36.794568   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.808695   68399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:36.830684   68399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
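Taken together, the sed edits above pin the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in. A sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf; the section headers and any other keys come from the stock buildroot image and are assumptions here, only the shown values are guaranteed by the commands in the log:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]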
	I0429 00:54:36.842730   68399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:54:36.853062   68399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 00:54:36.853118   68399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 00:54:36.868483   68399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:54:36.879234   68399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:54:37.003634   68399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:54:37.165809   68399 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:54:37.165882   68399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:54:37.171244   68399 start.go:562] Will wait 60s for crictl version
	I0429 00:54:37.171306   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.175362   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:54:37.215948   68399 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:54:37.216031   68399 ssh_runner.go:195] Run: crio --version
	I0429 00:54:37.249027   68399 ssh_runner.go:195] Run: crio --version
	I0429 00:54:37.281466   68399 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:54:37.282566   68399 main.go:141] libmachine: (no-preload-440870) Calling .GetIP
	I0429 00:54:37.285295   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:37.285714   68399 main.go:141] libmachine: (no-preload-440870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:d8:5f", ip: ""} in network mk-no-preload-440870: {Iface:virbr4 ExpiryTime:2024-04-29 01:54:27 +0000 UTC Type:0 Mac:52:54:00:be:d8:5f Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-440870 Clientid:01:52:54:00:be:d8:5f}
	I0429 00:54:37.285753   68399 main.go:141] libmachine: (no-preload-440870) DBG | domain no-preload-440870 has defined IP address 192.168.72.33 and MAC address 52:54:00:be:d8:5f in network mk-no-preload-440870
	I0429 00:54:37.285963   68399 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 00:54:37.290793   68399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
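The one-liner above rewrites /etc/hosts through a temp file: `grep -v` drops any stale host.minikube.internal entry, the new mapping is appended, and `sudo cp` copies the result back in one step, so repeated starts never accumulate duplicate lines. Afterwards the guest's /etc/hosts contains:

	192.168.72.1	host.minikube.internal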
	I0429 00:54:37.304538   68399 kubeadm.go:877] updating cluster {Name:no-preload-440870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-440870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:54:37.304637   68399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:54:37.304675   68399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:54:37.338783   68399 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 00:54:37.338813   68399 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 00:54:37.338861   68399 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:37.338879   68399 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 00:54:37.338905   68399 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 00:54:37.338927   68399 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 00:54:37.338937   68399 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 00:54:37.338911   68399 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 00:54:37.338967   68399 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 00:54:37.338974   68399 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 00:54:37.340207   68399 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 00:54:37.340204   68399 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 00:54:37.340348   68399 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 00:54:37.340360   68399 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 00:54:37.340366   68399 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 00:54:37.340372   68399 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 00:54:37.340371   68399 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 00:54:37.340402   68399 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:37.445161   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 00:54:37.445434   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 00:54:37.448347   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 00:54:37.448941   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 00:54:37.450158   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 00:54:37.456605   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 00:54:37.470647   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 00:54:37.621807   68399 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 00:54:37.621847   68399 cri.go:232] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 00:54:37.621866   68399 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 00:54:37.621896   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.621899   68399 cri.go:232] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 00:54:37.621991   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.661247   68399 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 00:54:37.661273   68399 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0429 00:54:37.661290   68399 cri.go:232] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 00:54:37.661305   68399 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 00:54:37.661329   68399 cri.go:232] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 00:54:37.661337   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.661358   68399 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 00:54:37.661388   68399 cri.go:232] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 00:54:37.661368   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.661424   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.661311   68399 cri.go:232] Removing image: registry.k8s.io/pause:3.9
	I0429 00:54:37.661473   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.676677   68399 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 00:54:37.676712   68399 cri.go:232] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 00:54:37.676731   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 00:54:37.676746   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:37.676826   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 00:54:37.676843   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 00:54:37.676868   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 00:54:37.676890   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 00:54:37.676909   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0429 00:54:37.787578   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 00:54:37.787723   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 00:54:37.787820   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 00:54:37.806012   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 00:54:37.806145   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 00:54:37.810766   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 00:54:37.810783   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 00:54:37.810848   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0429 00:54:37.810864   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 00:54:37.810867   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 00:54:37.810929   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
	I0429 00:54:37.810947   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 00:54:37.810872   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0429 00:54:37.847893   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 00:54:37.847953   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.0': No such file or directory
	I0429 00:54:37.847981   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 --> /var/lib/minikube/images/kube-scheduler_v1.30.0 (19219456 bytes)
	I0429 00:54:37.848001   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 00:54:37.848014   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0429 00:54:37.848045   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.0': No such file or directory
	I0429 00:54:37.848062   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 --> /var/lib/minikube/images/kube-controller-manager_v1.30.0 (31041024 bytes)
	I0429 00:54:37.848073   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0429 00:54:37.848091   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0429 00:54:37.848045   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0429 00:54:37.848150   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0429 00:54:37.848168   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.0': No such file or directory
	I0429 00:54:37.848170   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0429 00:54:37.848182   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 --> /var/lib/minikube/images/kube-apiserver_v1.30.0 (32674304 bytes)
	I0429 00:54:37.869673   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.0': No such file or directory
	I0429 00:54:37.869716   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 --> /var/lib/minikube/images/kube-proxy_v1.30.0 (29022720 bytes)
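Every cached image above follows the same pattern: `stat` the target under /var/lib/minikube/images first, and only when that exits non-zero scp the tarball from the host-side cache. The Go sketch below shows the same check-then-copy idea using a plain local file copy as a stand-in for the SSH transfer that ssh_runner.go actually performs; the paths are the ones from this log.

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing stats the destination and only copies when it is absent,
	// mirroring the existence check / scp sequence in the log.
	func copyIfMissing(src, dst string) (copied bool, err error) {
		if _, err := os.Stat(dst); err == nil {
			return false, nil // already present, nothing to transfer
		} else if !os.IsNotExist(err) {
			return false, err
		}
		in, err := os.Open(src)
		if err != nil {
			return false, err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return false, err
		}
		defer out.Close()
		if _, err := io.Copy(out, in); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		copied, err := copyIfMissing(
			"/home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9",
			"/var/lib/minikube/images/pause_3.9",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("copied:", copied)
	}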
	I0429 00:54:37.955604   68399 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0429 00:54:37.955662   68399 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0429 00:54:38.214077   68399 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:35.997545   68628 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219055
	
	I0429 00:54:35.997577   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:36.000990   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.001388   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:36.001420   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.001653   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:36.001842   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:36.002007   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:36.002186   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:36.002356   68628 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:36.002560   68628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:54:36.002586   68628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-219055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-219055/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-219055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:54:36.115548   68628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:54:36.115575   68628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:54:36.115620   68628 buildroot.go:174] setting up certificates
	I0429 00:54:36.115640   68628 provision.go:84] configureAuth start
	I0429 00:54:36.115659   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetMachineName
	I0429 00:54:36.115963   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:54:36.118750   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.119194   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:36.119221   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.119385   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:36.121308   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.121666   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:36.121704   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.121874   68628 provision.go:143] copyHostCerts
	I0429 00:54:36.121937   68628 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:54:36.121950   68628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:54:36.122011   68628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:54:36.122126   68628 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:54:36.122138   68628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:54:36.122158   68628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:54:36.122214   68628 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:54:36.122221   68628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:54:36.122238   68628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:54:36.122279   68628 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-219055 san=[127.0.0.1 192.168.50.69 kubernetes-upgrade-219055 localhost minikube]
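provision.go issues the machine's server certificate against the shared ca.pem/ca-key.pem pair, with the SANs listed above (127.0.0.1, 192.168.50.69, the profile name, localhost, minikube). The Go sketch below illustrates how a certificate with those SANs can be issued; it generates a throwaway CA instead of loading minikube's, so treat it as an illustration of the SAN handling rather than the actual provisioner.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (minikube would load ca.pem/ca-key.pem instead).
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the provision.go line above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-219055"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"kubernetes-upgrade-219055", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.69")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}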
	I0429 00:54:36.331549   68628 provision.go:177] copyRemoteCerts
	I0429 00:54:36.331605   68628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:54:36.331626   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:36.334383   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.334750   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:36.334784   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.334954   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:36.335141   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:36.335308   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:36.335471   68628 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:54:36.423028   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:54:36.457150   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 00:54:36.495523   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:54:36.523506   68628 provision.go:87] duration metric: took 407.84825ms to configureAuth
	I0429 00:54:36.523539   68628 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:54:36.523770   68628 config.go:182] Loaded profile config "kubernetes-upgrade-219055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:54:36.523887   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:36.526972   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.527369   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:36.527410   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:36.527611   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:36.527794   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:36.527910   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:36.528054   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:36.528257   68628 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:36.528467   68628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:54:36.528501   68628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:54:38.798689   68399 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0429 00:54:38.798732   68399 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 00:54:38.798795   68399 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 00:54:38.798845   68399 cri.go:232] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:38.798891   68399 ssh_runner.go:195] Run: which crictl
	I0429 00:54:38.798801   68399 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 00:54:38.804364   68399 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 00:54:40.799350   68399 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.000414387s)
	I0429 00:54:40.799383   68399 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 00:54:40.799402   68399 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.995010477s)
	I0429 00:54:40.799413   68399 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 00:54:40.799460   68399 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 00:54:40.799485   68399 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 00:54:40.799546   68399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 00:54:42.763234   68628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:54:42.763258   68628 machine.go:97] duration metric: took 7.028409516s to provisionDockerMachine
	I0429 00:54:42.763269   68628 start.go:293] postStartSetup for "kubernetes-upgrade-219055" (driver="kvm2")
	I0429 00:54:42.763280   68628 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:54:42.763303   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:42.763635   68628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:54:42.763665   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:42.767088   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:42.767550   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:42.767576   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:42.767810   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:42.768001   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:42.768222   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:42.768415   68628 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:54:42.857994   68628 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:54:42.862994   68628 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:54:42.863023   68628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:54:42.863104   68628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:54:42.863204   68628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:54:42.863320   68628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:54:42.874387   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:54:42.904198   68628 start.go:296] duration metric: took 140.915977ms for postStartSetup
	I0429 00:54:42.904241   68628 fix.go:56] duration metric: took 7.196279579s for fixHost
	I0429 00:54:42.904263   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:42.907354   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:42.907787   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:42.907826   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:42.907963   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:42.908136   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:42.908334   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:42.908490   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:42.908682   68628 main.go:141] libmachine: Using SSH client type: native
	I0429 00:54:42.908861   68628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I0429 00:54:42.908874   68628 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 00:54:43.023813   68628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714352083.021098003
	
	I0429 00:54:43.023837   68628 fix.go:216] guest clock: 1714352083.021098003
	I0429 00:54:43.023847   68628 fix.go:229] Guest: 2024-04-29 00:54:43.021098003 +0000 UTC Remote: 2024-04-29 00:54:42.904245343 +0000 UTC m=+37.068299027 (delta=116.85266ms)
	I0429 00:54:43.023871   68628 fix.go:200] guest clock delta is within tolerance: 116.85266ms
	I0429 00:54:43.023878   68628 start.go:83] releasing machines lock for "kubernetes-upgrade-219055", held for 7.31594932s
	I0429 00:54:43.023904   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:43.024165   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:54:43.027167   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.027607   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:43.027643   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.027887   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:43.028535   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:43.028774   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:54:43.028896   68628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:54:43.028942   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:43.029017   68628 ssh_runner.go:195] Run: cat /version.json
	I0429 00:54:43.029043   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHHostname
	I0429 00:54:43.031888   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.032026   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.032333   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:43.032363   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.032389   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:43.032406   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:43.032681   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:43.032761   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHPort
	I0429 00:54:43.032881   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:43.032980   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:43.033043   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHKeyPath
	I0429 00:54:43.033118   68628 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:54:43.033215   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetSSHUsername
	I0429 00:54:43.033354   68628 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/kubernetes-upgrade-219055/id_rsa Username:docker}
	I0429 00:54:43.117366   68628 ssh_runner.go:195] Run: systemctl --version
	I0429 00:54:43.142902   68628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:54:43.315428   68628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:54:43.323513   68628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:54:43.323566   68628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:54:43.334178   68628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:54:43.334199   68628 start.go:494] detecting cgroup driver to use...
	I0429 00:54:43.334257   68628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:54:43.352839   68628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:54:43.368119   68628 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:54:43.368163   68628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:54:43.384173   68628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:54:43.398712   68628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:54:43.571565   68628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:54:43.744374   68628 docker.go:233] disabling docker service ...
	I0429 00:54:43.744454   68628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:54:43.762151   68628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:54:43.777983   68628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:54:43.919860   68628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:54:44.063258   68628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:54:44.083584   68628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:54:44.107047   68628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:54:44.107107   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.122748   68628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:54:44.122813   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.136514   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.151188   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.165177   68628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:54:44.179985   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.192194   68628 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.204284   68628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:54:44.216810   68628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:54:44.228142   68628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:54:44.241167   68628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:54:44.398870   68628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:54:44.753442   68628 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:54:44.753517   68628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:54:44.759309   68628 start.go:562] Will wait 60s for crictl version
	I0429 00:54:44.759350   68628 ssh_runner.go:195] Run: which crictl
	I0429 00:54:44.764845   68628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:54:44.813033   68628 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:54:44.813137   68628 ssh_runner.go:195] Run: crio --version
	I0429 00:54:44.845317   68628 ssh_runner.go:195] Run: crio --version
	I0429 00:54:44.881183   68628 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:54:44.882497   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .GetIP
	I0429 00:54:44.885157   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:44.885578   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:36:0e", ip: ""} in network mk-kubernetes-upgrade-219055: {Iface:virbr2 ExpiryTime:2024-04-29 01:48:59 +0000 UTC Type:0 Mac:52:54:00:b1:36:0e Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kubernetes-upgrade-219055 Clientid:01:52:54:00:b1:36:0e}
	I0429 00:54:44.885609   68628 main.go:141] libmachine: (kubernetes-upgrade-219055) DBG | domain kubernetes-upgrade-219055 has defined IP address 192.168.50.69 and MAC address 52:54:00:b1:36:0e in network mk-kubernetes-upgrade-219055
	I0429 00:54:44.885829   68628 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 00:54:44.890853   68628 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:54:44.890987   68628 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:54:44.891059   68628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:54:44.944556   68628 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:54:44.944633   68628 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:54:44.944711   68628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:54:44.988458   68628 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:54:44.988487   68628 cache_images.go:84] Images are preloaded, skipping loading
	I0429 00:54:44.988496   68628 kubeadm.go:928] updating node { 192.168.50.69 8443 v1.30.0 crio true true} ...
	I0429 00:54:44.988634   68628 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-219055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:54:44.988713   68628 ssh_runner.go:195] Run: crio config
	I0429 00:54:45.046168   68628 cni.go:84] Creating CNI manager for ""
	I0429 00:54:45.046204   68628 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:54:45.046219   68628 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:54:45.046244   68628 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.69 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-219055 NodeName:kubernetes-upgrade-219055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:54:45.046459   68628 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-219055"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:54:45.046546   68628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:54:45.062187   68628 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:54:45.062275   68628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:54:45.073961   68628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0429 00:54:45.093667   68628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:54:45.112408   68628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0429 00:54:45.130353   68628 ssh_runner.go:195] Run: grep 192.168.50.69	control-plane.minikube.internal$ /etc/hosts
	I0429 00:54:45.135023   68628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:54:45.302057   68628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:54:45.320135   68628 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055 for IP: 192.168.50.69
	I0429 00:54:45.320157   68628 certs.go:194] generating shared ca certs ...
	I0429 00:54:45.320172   68628 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:54:45.320327   68628 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:54:45.320366   68628 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:54:45.320380   68628 certs.go:256] generating profile certs ...
	I0429 00:54:45.320465   68628 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/client.key
	I0429 00:54:45.320512   68628 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key.752b27af
	I0429 00:54:45.320543   68628 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key
	I0429 00:54:45.320651   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:54:45.320684   68628 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:54:45.320693   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:54:45.320717   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:54:45.320743   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:54:45.320764   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:54:45.320802   68628 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:54:45.321383   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:54:45.359350   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:54:45.388100   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:54:45.415980   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:54:45.442143   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 00:54:45.468962   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 00:54:45.572990   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:54:45.779376   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/kubernetes-upgrade-219055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 00:54:43.587900   68399 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.788389826s)
	I0429 00:54:43.587927   68399 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 00:54:43.587953   68399 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 00:54:43.587959   68399 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.788389777s)
	I0429 00:54:43.587991   68399 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0429 00:54:43.587999   68399 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 00:54:43.588016   68399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0429 00:54:46.288915   68399 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.700882394s)
	I0429 00:54:46.288953   68399 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17977-13393/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 00:54:46.288985   68399 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 00:54:46.289038   68399 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 00:54:45.995162   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:54:46.150509   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:54:46.234745   68628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:54:46.559009   68628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:54:46.654295   68628 ssh_runner.go:195] Run: openssl version
	I0429 00:54:46.685328   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:54:46.788533   68628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:46.837368   68628 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:46.837427   68628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:54:46.856708   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:54:46.882791   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:54:46.904401   68628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:54:46.913557   68628 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:54:46.913640   68628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:54:46.922588   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:54:46.937409   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:54:46.954156   68628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:54:46.959923   68628 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:54:46.959986   68628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:54:46.968022   68628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:54:46.989896   68628 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:54:47.006924   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:54:47.016171   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:54:47.035434   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:54:47.049084   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:54:47.060518   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:54:47.071547   68628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 00:54:47.085414   68628 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:54:47.085511   68628 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:54:47.085567   68628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:54:47.215215   68628 cri.go:91] found id: "31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53"
	I0429 00:54:47.215244   68628 cri.go:91] found id: "7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406"
	I0429 00:54:47.215250   68628 cri.go:91] found id: "797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e"
	I0429 00:54:47.215256   68628 cri.go:91] found id: "70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060"
	I0429 00:54:47.215260   68628 cri.go:91] found id: "c3966ddcb3f1621d5360af4c50636fc8b93d2256267b432313b7c890a4530f10"
	I0429 00:54:47.215265   68628 cri.go:91] found id: "da4bbd47c103a32a92699550ab6a22de2d5420e831c27e436e9419eaa1b67221"
	I0429 00:54:47.215278   68628 cri.go:91] found id: "51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce"
	I0429 00:54:47.215283   68628 cri.go:91] found id: "a1ffa9715cc038d50873c49b5f936cc7b1d237860850e1e9deee553fb2f0631a"
	I0429 00:54:47.215290   68628 cri.go:91] found id: "c32a30d2b7ca88ade64062cec43a8a16bfe4f9e4945ef59b4468806cafe51216"
	I0429 00:54:47.215305   68628 cri.go:91] found id: "73dc29f6d13081cd8521eedf0bb7e788805fe5b35fb98473fc707e266f03cc45"
	I0429 00:54:47.215313   68628 cri.go:91] found id: "4e086b0e19947cc504cc4459a8e6173ab58d4108fc2a1ea3975520ed58c044e7"
	I0429 00:54:47.215318   68628 cri.go:91] found id: "4ceda776e06e39c4db75b37e6d4ab0198a15c27a67d4913403960c2e170e1c44"
	I0429 00:54:47.215322   68628 cri.go:91] found id: ""
	I0429 00:54:47.215385   68628 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.627593220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cc84c0d-be51-494c-b7a3-c0d5163ebc7b name=/runtime.v1.RuntimeService/Version
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.629020462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b157b958-67c6-417d-a09f-22a48e4f2f71 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.630039274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352117630014665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b157b958-67c6-417d-a09f-22a48e4f2f71 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.630714367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df64969b-2790-4fb1-8187-ba81cfb3ca56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.630769259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df64969b-2790-4fb1-8187-ba81cfb3ca56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.631089648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f96be968381f40ef4d173a0c7c64eaf4fc0d989f78dc444984340f6bc6366d0,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352114746839785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506230c089ff5ef047b42344ceb6f16bf71cfb8d345197f9c497d1582962b6,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714352114723954734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1ecc05c5f11d2df00671e9b1de4efd1b179a922b769f2779bcc040d8c321ee,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a043cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714352110961792140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef84e8db2398b3fe9aa11e2b4c0ac80769f2959ad09a3352f3d8568c9303c2,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714352110946237345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e685ea24234fd096d41d2bd6ca07d0ea778b9a991b8c8f966010a73a235fdc,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714352110957966892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ddaab0e4c279688d01e6ad3d7c723427648511ab8124b2651b13384b7e3991,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352108269953952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f2dbf9657c3490c219550429abca6b7a214af64d31ee7b16887e7a60cb5e9c,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUN
NING,CreatedAt:1714352108220212792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8abe8996e5ee2db2787ff4c68eb174f393ec8cac2233b906a226dc8d036e188e,PodSandboxId:8b433813ca7cb1d0bd78ce2a70da18b52432e61ea2f8d01ea3f59c545f67a6eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17143520862473
75360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0ccd21e2c6825e5710ad6f34d146541163fa6b3c9c8122a032cde61915e824,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087287034285,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714352086289884966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087148844989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23
692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714352086321423814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a04
3cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714352086296447048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2
e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714352086238636078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714352086204389970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce,PodSandboxId:6c1372694d04948f37c1005e1d58fdd6455d2fb6f7e0590a41c6052147415dcf,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714352055486933538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df64969b-2790-4fb1-8187-ba81cfb3ca56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.686216832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbbe7dc7-46ea-4284-b7e9-511cff7a02cf name=/runtime.v1.RuntimeService/Version
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.686383109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbbe7dc7-46ea-4284-b7e9-511cff7a02cf name=/runtime.v1.RuntimeService/Version
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.688661395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60fd91fb-e5a3-412f-bcfb-705a1a70a39f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.689478812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352117689450274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60fd91fb-e5a3-412f-bcfb-705a1a70a39f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.689707530Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=bffe15e8-5d73-4a02-9473-3d0844fc36c8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.689932583Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-njsmk,Uid:6e671fc1-c9ac-460c-a8eb-e594b9c93add,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085953170960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:54:14.924040212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gsr5j,Uid:7319db68-b41b-4cab-843b-8b58b19b6f33,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085942478482,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:54:14.867633462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:764c4604dee732d92e516b51e6cf615a36c2a043cfe9cb3373ac2a0847aba370,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-219055,Uid:a8abd3cb1d8d8f1500b5e2b175363e8b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085770875345,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,tier: control-plane,},Ann
otations:map[string]string{kubernetes.io/config.hash: a8abd3cb1d8d8f1500b5e2b175363e8b,kubernetes.io/config.seen: 2024-04-29T00:53:55.171894752Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b433813ca7cb1d0bd78ce2a70da18b52432e61ea2f8d01ea3f59c545f67a6eb,Metadata:&PodSandboxMetadata{Name:kube-proxy-xfs78,Uid:5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085705248872,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:54:14.785334499Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2e1
1ec4a-41b7-410b-a315-e0ad8d33bd41,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085626774118,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountN
ame\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T00:54:13.988701493Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-219055,Uid:2166d02f044cc1a890a08a588fd3dedb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085571188379,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.69:8443,kubernetes.io/config.hash: 2166d02f044cc1a890a08a588fd3dedb,kubernetes.io/config.seen: 2024-04-29T00:53:55.171892782Z,kubernetes.io
/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2e4a49c54c836e004249,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-219055,Uid:e9b7156bd54cdeebeb5ee0d0580d8b84,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085531140510,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.69:2379,kubernetes.io/config.hash: e9b7156bd54cdeebeb5ee0d0580d8b84,kubernetes.io/config.seen: 2024-04-29T00:53:55.171887873Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23692d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-219055,Uid:aa668b2927ed2bad5b3bb
87f0b39e289,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714352085520762166,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa668b2927ed2bad5b3bb87f0b39e289,kubernetes.io/config.seen: 2024-04-29T00:53:55.171893911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c1372694d04948f37c1005e1d58fdd6455d2fb6f7e0590a41c6052147415dcf,Metadata:&PodSandboxMetadata{Name:kube-proxy-xfs78,Uid:5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714352055097989110,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:54:14.785334499Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bffe15e8-5d73-4a02-9473-3d0844fc36c8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.690654870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e85cfd84-2e13-4956-9836-14a69b647bc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.690741123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e85cfd84-2e13-4956-9836-14a69b647bc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.691086609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f96be968381f40ef4d173a0c7c64eaf4fc0d989f78dc444984340f6bc6366d0,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352114746839785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506230c089ff5ef047b42344ceb6f16bf71cfb8d345197f9c497d1582962b6,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714352114723954734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1ecc05c5f11d2df00671e9b1de4efd1b179a922b769f2779bcc040d8c321ee,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a043cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714352110961792140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef84e8db2398b3fe9aa11e2b4c0ac80769f2959ad09a3352f3d8568c9303c2,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714352110946237345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e685ea24234fd096d41d2bd6ca07d0ea778b9a991b8c8f966010a73a235fdc,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714352110957966892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ddaab0e4c279688d01e6ad3d7c723427648511ab8124b2651b13384b7e3991,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352108269953952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f2dbf9657c3490c219550429abca6b7a214af64d31ee7b16887e7a60cb5e9c,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUN
NING,CreatedAt:1714352108220212792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8abe8996e5ee2db2787ff4c68eb174f393ec8cac2233b906a226dc8d036e188e,PodSandboxId:8b433813ca7cb1d0bd78ce2a70da18b52432e61ea2f8d01ea3f59c545f67a6eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17143520862473
75360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0ccd21e2c6825e5710ad6f34d146541163fa6b3c9c8122a032cde61915e824,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087287034285,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714352086289884966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087148844989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23
692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714352086321423814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a04
3cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714352086296447048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2
e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714352086238636078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714352086204389970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce,PodSandboxId:6c1372694d04948f37c1005e1d58fdd6455d2fb6f7e0590a41c6052147415dcf,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714352055486933538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e85cfd84-2e13-4956-9836-14a69b647bc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.692630649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f80b5455-efe9-4f1f-8aed-d14bc48a91b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.692680846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f80b5455-efe9-4f1f-8aed-d14bc48a91b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.692984295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f96be968381f40ef4d173a0c7c64eaf4fc0d989f78dc444984340f6bc6366d0,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352114746839785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506230c089ff5ef047b42344ceb6f16bf71cfb8d345197f9c497d1582962b6,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714352114723954734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1ecc05c5f11d2df00671e9b1de4efd1b179a922b769f2779bcc040d8c321ee,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a043cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714352110961792140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef84e8db2398b3fe9aa11e2b4c0ac80769f2959ad09a3352f3d8568c9303c2,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714352110946237345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e685ea24234fd096d41d2bd6ca07d0ea778b9a991b8c8f966010a73a235fdc,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714352110957966892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ddaab0e4c279688d01e6ad3d7c723427648511ab8124b2651b13384b7e3991,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352108269953952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f2dbf9657c3490c219550429abca6b7a214af64d31ee7b16887e7a60cb5e9c,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUN
NING,CreatedAt:1714352108220212792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8abe8996e5ee2db2787ff4c68eb174f393ec8cac2233b906a226dc8d036e188e,PodSandboxId:8b433813ca7cb1d0bd78ce2a70da18b52432e61ea2f8d01ea3f59c545f67a6eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17143520862473
75360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0ccd21e2c6825e5710ad6f34d146541163fa6b3c9c8122a032cde61915e824,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087287034285,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714352086289884966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087148844989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23
692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714352086321423814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a04
3cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714352086296447048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2
e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714352086238636078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714352086204389970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce,PodSandboxId:6c1372694d04948f37c1005e1d58fdd6455d2fb6f7e0590a41c6052147415dcf,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714352055486933538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f80b5455-efe9-4f1f-8aed-d14bc48a91b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.740401227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5511c91-4226-4d46-bf49-2170248cb416 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.740896919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5511c91-4226-4d46-bf49-2170248cb416 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.743369592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81bdab66-da41-4516-a8df-4cae9f56555e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.743775935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352117743755966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81bdab66-da41-4516-a8df-4cae9f56555e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.744618106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4227dc04-ea55-4028-a90d-10743d401e58 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.744668818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4227dc04-ea55-4028-a90d-10743d401e58 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:55:17 kubernetes-upgrade-219055 crio[2295]: time="2024-04-29 00:55:17.745152600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f96be968381f40ef4d173a0c7c64eaf4fc0d989f78dc444984340f6bc6366d0,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352114746839785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71506230c089ff5ef047b42344ceb6f16bf71cfb8d345197f9c497d1582962b6,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714352114723954734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1ecc05c5f11d2df00671e9b1de4efd1b179a922b769f2779bcc040d8c321ee,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a043cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714352110961792140,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef84e8db2398b3fe9aa11e2b4c0ac80769f2959ad09a3352f3d8568c9303c2,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714352110946237345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e685ea24234fd096d41d2bd6ca07d0ea778b9a991b8c8f966010a73a235fdc,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714352110957966892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ddaab0e4c279688d01e6ad3d7c723427648511ab8124b2651b13384b7e3991,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714352108269953952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f2dbf9657c3490c219550429abca6b7a214af64d31ee7b16887e7a60cb5e9c,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUN
NING,CreatedAt:1714352108220212792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8abe8996e5ee2db2787ff4c68eb174f393ec8cac2233b906a226dc8d036e188e,PodSandboxId:8b433813ca7cb1d0bd78ce2a70da18b52432e61ea2f8d01ea3f59c545f67a6eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:17143520862473
75360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba0ccd21e2c6825e5710ad6f34d146541163fa6b3c9c8122a032cde61915e824,PodSandboxId:2a18ccf84b54080797605f36078d416ccf486a4bbb485427caa4a012401fa025,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087287034285,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gsr5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7319db68-b41b-4cab-843b-8b58b19b6f33,},Annotations:map[string]string{io.kubernetes.container.hash: d523e5a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51,PodSandboxId:7b65053ac0a36b7c116c4b229ee149fb7de59a02c273e0b76699486039a45f75,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714352086289884966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e11ec4a-41b7-410b-a315-e0ad8d33bd41,},Annotations:map[string]string{io.kubernetes.container.hash: f823047d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e,PodSandboxId:d3570cc8acfe53640296afbb27f35a665d7da31336578f978e6c8c09cffaad54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714352087148844989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njsmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e671fc1-c9ac-460c-a8eb-e594b9c93add,},Annotations:map[string]string{io.kubernetes.container.hash: dd5b421f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53,PodSandboxId:039874a9316410cb97e1d4f8a2df90a30dfd141f144a8ded10b1c082fb23
692d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714352086321423814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa668b2927ed2bad5b3bb87f0b39e289,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406,PodSandboxId:764c4604dee732d92e516b51e6cf615a36c2a04
3cfe9cb3373ac2a0847aba370,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714352086296447048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8abd3cb1d8d8f1500b5e2b175363e8b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e,PodSandboxId:bdea68bf05d359fd2ef59e8d7f53f4c4aa74178d58ea2
e4a49c54c836e004249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714352086238636078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b7156bd54cdeebeb5ee0d0580d8b84,},Annotations:map[string]string{io.kubernetes.container.hash: a32ce140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060,PodSandboxId:127bb0e95e76a3cd90464d54ecabba6fbd2093dc057146cf45b04f4ab9c12c90,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714352086204389970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2166d02f044cc1a890a08a588fd3dedb,},Annotations:map[string]string{io.kubernetes.container.hash: 322f7f41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce,PodSandboxId:6c1372694d04948f37c1005e1d58fdd6455d2fb6f7e0590a41c6052147415dcf,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714352055486933538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e79833,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4227dc04-ea55-4028-a90d-10743d401e58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9f96be968381f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   d3570cc8acfe5       coredns-7db6d8ff4d-njsmk
	71506230c089f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   7b65053ac0a36       storage-provisioner
	bb1ecc05c5f11       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   6 seconds ago        Running             kube-scheduler            2                   764c4604dee73       kube-scheduler-kubernetes-upgrade-219055
	f3e685ea24234       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   6 seconds ago        Running             kube-apiserver            2                   127bb0e95e76a       kube-apiserver-kubernetes-upgrade-219055
	58ef84e8db239       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   6 seconds ago        Running             kube-controller-manager   2                   039874a931641       kube-controller-manager-kubernetes-upgrade-219055
	c1ddaab0e4c27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 seconds ago        Running             coredns                   2                   2a18ccf84b540       coredns-7db6d8ff4d-gsr5j
	26f2dbf9657c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 seconds ago        Running             etcd                      2                   bdea68bf05d35       etcd-kubernetes-upgrade-219055
	ba0ccd21e2c68       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago       Exited              coredns                   1                   2a18ccf84b540       coredns-7db6d8ff4d-gsr5j
	16ce217240419       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago       Exited              coredns                   1                   d3570cc8acfe5       coredns-7db6d8ff4d-njsmk
	31d46b0d1d1bd       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   31 seconds ago       Exited              kube-controller-manager   1                   039874a931641       kube-controller-manager-kubernetes-upgrade-219055
	7571d7724aa7d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   31 seconds ago       Exited              kube-scheduler            1                   764c4604dee73       kube-scheduler-kubernetes-upgrade-219055
	8f29de8c2cb3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago       Exited              storage-provisioner       1                   7b65053ac0a36       storage-provisioner
	8abe8996e5ee2       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   31 seconds ago       Running             kube-proxy                1                   8b433813ca7cb       kube-proxy-xfs78
	797d5a909c03e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   31 seconds ago       Exited              etcd                      1                   bdea68bf05d35       etcd-kubernetes-upgrade-219055
	70e959c8357ff       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   31 seconds ago       Exited              kube-apiserver            1                   127bb0e95e76a       kube-apiserver-kubernetes-upgrade-219055
	51e1217b8a9c1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   6c1372694d049       kube-proxy-xfs78
	
	
	==> coredns [16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f96be968381f40ef4d173a0c7c64eaf4fc0d989f78dc444984340f6bc6366d0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ba0ccd21e2c6825e5710ad6f34d146541163fa6b3c9c8122a032cde61915e824] <==
	
	
	==> coredns [c1ddaab0e4c279688d01e6ad3d7c723427648511ab8124b2651b13384b7e3991] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-219055
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-219055
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:53:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-219055
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:55:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:55:13 +0000   Mon, 29 Apr 2024 00:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:55:13 +0000   Mon, 29 Apr 2024 00:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:55:13 +0000   Mon, 29 Apr 2024 00:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:55:13 +0000   Mon, 29 Apr 2024 00:54:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.69
	  Hostname:    kubernetes-upgrade-219055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 968d28e078e5486d9722c59b8ef2bd02
	  System UUID:                968d28e0-78e5-486d-9722-c59b8ef2bd02
	  Boot ID:                    c3afd21e-6c1b-46ff-8581-f8fdd19d5010
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gsr5j                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 coredns-7db6d8ff4d-njsmk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-kubernetes-upgrade-219055                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kube-apiserver-kubernetes-upgrade-219055             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-219055    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-xfs78                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-kubernetes-upgrade-219055             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node kubernetes-upgrade-219055 event: Registered Node kubernetes-upgrade-219055 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-219055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.282560] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.065075] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075527] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.210919] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.164451] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.349450] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +5.777539] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.066027] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.898124] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[Apr29 00:54] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	[  +0.078967] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.037521] kauditd_printk_skb: 21 callbacks suppressed
	[ +28.434710] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.095133] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.086003] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +0.186286] systemd-fstab-generator[2241]: Ignoring "noauto" option for root device
	[  +0.146450] systemd-fstab-generator[2253]: Ignoring "noauto" option for root device
	[  +0.335551] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.876441] systemd-fstab-generator[2437]: Ignoring "noauto" option for root device
	[  +3.647343] kauditd_printk_skb: 228 callbacks suppressed
	[Apr29 00:55] systemd-fstab-generator[3625]: Ignoring "noauto" option for root device
	[  +4.668008] kauditd_printk_skb: 45 callbacks suppressed
	[  +0.770207] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	
	
	==> etcd [26f2dbf9657c3490c219550429abca6b7a214af64d31ee7b16887e7a60cb5e9c] <==
	{"level":"info","ts":"2024-04-29T00:55:08.540477Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:55:08.540488Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:55:08.540713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd switched to configuration voters=(14682526388968476125)"}
	{"level":"info","ts":"2024-04-29T00:55:08.540788Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d8b09062677f53c2","local-member-id":"cbc2cff59bc3cddd","added-peer-id":"cbc2cff59bc3cddd","added-peer-peer-urls":["https://192.168.50.69:2380"]}
	{"level":"info","ts":"2024-04-29T00:55:08.540916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d8b09062677f53c2","local-member-id":"cbc2cff59bc3cddd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:55:08.540966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:55:08.546075Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T00:55:08.548495Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cbc2cff59bc3cddd","initial-advertise-peer-urls":["https://192.168.50.69:2380"],"listen-peer-urls":["https://192.168.50.69:2380"],"advertise-client-urls":["https://192.168.50.69:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.69:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T00:55:08.548571Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:55:08.548666Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.69:2380"}
	{"level":"info","ts":"2024-04-29T00:55:08.548674Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.69:2380"}
	{"level":"info","ts":"2024-04-29T00:55:09.625098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-29T00:55:09.625146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:55:09.625184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd received MsgPreVoteResp from cbc2cff59bc3cddd at term 3"}
	{"level":"info","ts":"2024-04-29T00:55:09.625201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became candidate at term 4"}
	{"level":"info","ts":"2024-04-29T00:55:09.625209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd received MsgVoteResp from cbc2cff59bc3cddd at term 4"}
	{"level":"info","ts":"2024-04-29T00:55:09.625226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became leader at term 4"}
	{"level":"info","ts":"2024-04-29T00:55:09.625235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbc2cff59bc3cddd elected leader cbc2cff59bc3cddd at term 4"}
	{"level":"info","ts":"2024-04-29T00:55:09.631834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:55:09.632186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:55:09.632206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:55:09.631836Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cbc2cff59bc3cddd","local-member-attributes":"{Name:kubernetes-upgrade-219055 ClientURLs:[https://192.168.50.69:2379]}","request-path":"/0/members/cbc2cff59bc3cddd/attributes","cluster-id":"d8b09062677f53c2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:55:09.631867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:55:09.634963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:55:09.636503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.69:2379"}
	
	
	==> etcd [797d5a909c03edbbef586503df1e89348700f8dcec7b065738aee52003fa6e6e] <==
	{"level":"info","ts":"2024-04-29T00:54:47.041719Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:54:47.143633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T00:54:47.150625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:54:47.151437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd received MsgPreVoteResp from cbc2cff59bc3cddd at term 2"}
	{"level":"info","ts":"2024-04-29T00:54:47.153876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:54:47.153954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd received MsgVoteResp from cbc2cff59bc3cddd at term 3"}
	{"level":"info","ts":"2024-04-29T00:54:47.153965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbc2cff59bc3cddd became leader at term 3"}
	{"level":"info","ts":"2024-04-29T00:54:47.153973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbc2cff59bc3cddd elected leader cbc2cff59bc3cddd at term 3"}
	{"level":"info","ts":"2024-04-29T00:54:47.166226Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cbc2cff59bc3cddd","local-member-attributes":"{Name:kubernetes-upgrade-219055 ClientURLs:[https://192.168.50.69:2379]}","request-path":"/0/members/cbc2cff59bc3cddd/attributes","cluster-id":"d8b09062677f53c2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:54:47.16647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:54:47.166763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:54:47.166906Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:54:47.166917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:54:47.174353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.69:2379"}
	{"level":"info","ts":"2024-04-29T00:54:47.185158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:54:58.108929Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T00:54:58.109041Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-219055","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.69:2380"],"advertise-client-urls":["https://192.168.50.69:2379"]}
	{"level":"warn","ts":"2024-04-29T00:54:58.109126Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:54:58.109233Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:54:58.124316Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:54:58.124395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:54:58.125689Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cbc2cff59bc3cddd","current-leader-member-id":"cbc2cff59bc3cddd"}
	{"level":"info","ts":"2024-04-29T00:54:58.129656Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.69:2380"}
	{"level":"info","ts":"2024-04-29T00:54:58.129772Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.69:2380"}
	{"level":"info","ts":"2024-04-29T00:54:58.129801Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-219055","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.69:2380"],"advertise-client-urls":["https://192.168.50.69:2379"]}
	
	
	==> kernel <==
	 00:55:18 up 1 min,  0 users,  load average: 1.25, 0.48, 0.18
	Linux kubernetes-upgrade-219055 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060] <==
	W0429 00:55:07.605220       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.608035       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.754145       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.786946       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.789656       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.795593       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.812928       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.835383       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.845181       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.868102       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.973414       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:07.973811       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:08.089506       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:08.096507       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:08.096530       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:08.099033       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 00:55:08.109528       1 logging.go:59] [core] [Channel #2 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 00:55:08.183804       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0429 00:55:08.184052       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 00:55:08.185470       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 00:55:08.185534       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0429 00:55:08.186772       1 trace.go:236] Trace[1695078653]: "Get" accept:application/json, */*,audit-id:16a9d2f7-2dbc-4c9d-9d3c-f3da50027426,client:192.168.50.69,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 00:54:58.184) (total time: 10002ms):
	Trace[1695078653]: [10.002571595s] [10.002571595s] END
	E0429 00:55:08.187221       1 timeout.go:142] post-timeout activity - time-elapsed: 3.148105ms, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	W0429 00:55:08.269409       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f3e685ea24234fd096d41d2bd6ca07d0ea778b9a991b8c8f966010a73a235fdc] <==
	I0429 00:55:13.462179       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 00:55:13.507346       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:55:13.511084       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:55:13.511179       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:55:13.511205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:55:13.512240       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:55:13.527824       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0429 00:55:13.553509       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 00:55:13.602629       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:55:13.602720       1 policy_source.go:224] refreshing policies
	I0429 00:55:13.606062       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:55:13.606570       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:55:13.606613       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:55:13.607062       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:55:13.607501       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:55:13.608381       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:55:13.612530       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 00:55:13.641133       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:55:14.424195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:55:15.228511       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 00:55:15.239468       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:55:15.273057       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:55:15.411348       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:55:15.421685       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:55:16.479764       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53] <==
	I0429 00:54:52.595299       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0429 00:54:52.595556       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0429 00:54:52.644749       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0429 00:54:52.644869       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0429 00:54:52.644847       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0429 00:54:52.645215       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0429 00:54:52.696208       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0429 00:54:52.696474       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0429 00:54:52.696530       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0429 00:54:52.745058       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0429 00:54:52.745191       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0429 00:54:52.745538       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0429 00:54:52.794670       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0429 00:54:52.794862       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0429 00:54:52.794903       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0429 00:54:52.844651       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0429 00:54:52.844865       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0429 00:54:52.844909       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0429 00:54:52.894549       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0429 00:54:52.894740       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0429 00:54:52.895031       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0429 00:54:52.948779       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0429 00:54:52.948920       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0429 00:54:52.948952       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0429 00:54:52.948991       1 shared_informer.go:320] Caches are synced for token_cleaner
	
	
	==> kube-controller-manager [58ef84e8db2398b3fe9aa11e2b4c0ac80769f2959ad09a3352f3d8568c9303c2] <==
	I0429 00:55:15.621840       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0429 00:55:15.622007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0429 00:55:15.622041       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0429 00:55:15.625805       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0429 00:55:15.626019       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0429 00:55:15.626111       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0429 00:55:15.629634       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0429 00:55:15.629859       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0429 00:55:15.631389       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0429 00:55:15.635057       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0429 00:55:15.635362       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0429 00:55:15.635501       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0429 00:55:15.638636       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0429 00:55:15.638864       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0429 00:55:15.639976       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0429 00:55:15.642930       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0429 00:55:15.643190       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0429 00:55:15.645155       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0429 00:55:15.645303       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0429 00:55:15.651483       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0429 00:55:15.651804       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0429 00:55:15.652219       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0429 00:55:15.655368       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0429 00:55:15.655692       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0429 00:55:15.656399       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	
	
	==> kube-proxy [51e1217b8a9c1f25595749037ff242f1b22e7a26b445b96ba7bbd529d58496ce] <==
	I0429 00:54:15.852805       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:54:15.863158       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.69"]
	I0429 00:54:16.045492       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:54:16.045610       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:54:16.045633       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:54:16.055794       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:54:16.056152       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:54:16.056497       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:54:16.058059       1 config.go:192] "Starting service config controller"
	I0429 00:54:16.058073       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:54:16.058095       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:54:16.058099       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:54:16.059132       1 config.go:319] "Starting node config controller"
	I0429 00:54:16.059141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:54:16.159425       1 shared_informer.go:320] Caches are synced for node config
	I0429 00:54:16.159471       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:54:16.159502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [8abe8996e5ee2db2787ff4c68eb174f393ec8cac2233b906a226dc8d036e188e] <==
	I0429 00:54:48.884591       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:54:49.918873       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.69"]
	I0429 00:54:50.248058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:54:50.248125       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:54:50.248144       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:54:50.259059       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:54:50.259494       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:54:50.259603       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:54:50.261222       1 config.go:192] "Starting service config controller"
	I0429 00:54:50.261357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:54:50.261395       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:54:50.261401       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:54:50.262897       1 config.go:319] "Starting node config controller"
	I0429 00:54:50.262942       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:54:50.362496       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 00:54:50.362574       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:54:50.363166       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406] <==
	I0429 00:54:47.887642       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:54:49.899591       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 00:54:49.899643       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:54:49.899654       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:54:49.899660       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:54:49.940026       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:54:49.940774       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:54:49.947986       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:54:49.948074       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 00:54:49.948907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 00:54:49.948976       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:54:50.048518       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 00:54:57.967525       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0429 00:54:57.967713       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 00:54:57.967888       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0429 00:54:57.968525       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb1ecc05c5f11d2df00671e9b1de4efd1b179a922b769f2779bcc040d8c321ee] <==
	I0429 00:55:11.940645       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:55:13.473831       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 00:55:13.473985       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 00:55:13.474178       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:55:13.474393       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:55:13.545822       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:55:13.546409       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:55:13.550864       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:55:13.550927       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 00:55:13.551736       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 00:55:13.553931       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 00:55:13.651528       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 00:55:10 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:10.727696    3632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e9b7156bd54cdeebeb5ee0d0580d8b84-etcd-certs\") pod \"etcd-kubernetes-upgrade-219055\" (UID: \"e9b7156bd54cdeebeb5ee0d0580d8b84\") " pod="kube-system/etcd-kubernetes-upgrade-219055"
	Apr 29 00:55:10 kubernetes-upgrade-219055 kubelet[3632]: E0429 00:55:10.728091    3632 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.69:8443: connect: connection refused" node="kubernetes-upgrade-219055"
	Apr 29 00:55:10 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:10.925645    3632 scope.go:117] "RemoveContainer" containerID="70e959c8357ff6d4dcf1c18826680737dacae0618dd481785ca0917c1960c060"
	Apr 29 00:55:10 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:10.926666    3632 scope.go:117] "RemoveContainer" containerID="31d46b0d1d1bd9ad5aa671ece7c263a745dff8e1f866b191e11bfe547a5ecd53"
	Apr 29 00:55:10 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:10.928719    3632 scope.go:117] "RemoveContainer" containerID="7571d7724aa7d4cc10c10fcae27d84da174f49f61051899142ef85fa5ef4a406"
	Apr 29 00:55:11 kubernetes-upgrade-219055 kubelet[3632]: E0429 00:55:11.030802    3632 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-219055?timeout=10s\": dial tcp 192.168.50.69:8443: connect: connection refused" interval="800ms"
	Apr 29 00:55:11 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:11.130541    3632 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-219055"
	Apr 29 00:55:11 kubernetes-upgrade-219055 kubelet[3632]: E0429 00:55:11.131517    3632 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.69:8443: connect: connection refused" node="kubernetes-upgrade-219055"
	Apr 29 00:55:11 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:11.934421    3632 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-219055"
	Apr 29 00:55:13 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:13.643608    3632 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-219055"
	Apr 29 00:55:13 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:13.644076    3632 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-219055"
	Apr 29 00:55:13 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:13.646374    3632 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 00:55:13 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:13.647231    3632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 00:55:13 kubernetes-upgrade-219055 kubelet[3632]: E0429 00:55:13.681341    3632 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-219055\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-219055"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.391217    3632 apiserver.go:52] "Watching apiserver"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.394788    3632 topology_manager.go:215] "Topology Admit Handler" podUID="2e11ec4a-41b7-410b-a315-e0ad8d33bd41" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.394946    3632 topology_manager.go:215] "Topology Admit Handler" podUID="5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1" podNamespace="kube-system" podName="kube-proxy-xfs78"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.394993    3632 topology_manager.go:215] "Topology Admit Handler" podUID="7319db68-b41b-4cab-843b-8b58b19b6f33" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gsr5j"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.395031    3632 topology_manager.go:215] "Topology Admit Handler" podUID="6e671fc1-c9ac-460c-a8eb-e594b9c93add" podNamespace="kube-system" podName="coredns-7db6d8ff4d-njsmk"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.417947    3632 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.449728    3632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2e11ec4a-41b7-410b-a315-e0ad8d33bd41-tmp\") pod \"storage-provisioner\" (UID: \"2e11ec4a-41b7-410b-a315-e0ad8d33bd41\") " pod="kube-system/storage-provisioner"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.450845    3632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1-lib-modules\") pod \"kube-proxy-xfs78\" (UID: \"5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1\") " pod="kube-system/kube-proxy-xfs78"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.451088    3632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1-xtables-lock\") pod \"kube-proxy-xfs78\" (UID: \"5d0eba1e-7a46-4bc5-a51f-b8d4d3caa8c1\") " pod="kube-system/kube-proxy-xfs78"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.696970    3632 scope.go:117] "RemoveContainer" containerID="16ce2172404199076059d353f2746967be081e7f20ca01371e0bf0d28e27b68e"
	Apr 29 00:55:14 kubernetes-upgrade-219055 kubelet[3632]: I0429 00:55:14.697539    3632 scope.go:117] "RemoveContainer" containerID="8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51"
	
	
	==> storage-provisioner [71506230c089ff5ef047b42344ceb6f16bf71cfb8d345197f9c497d1582962b6] <==
	I0429 00:55:14.883985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 00:55:14.900194       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 00:55:14.900226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [8f29de8c2cb3fb86681ad4380aedef5462464b0d19ec332ea3a43e6510141b51] <==
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 90 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00039a0d0, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00039a0c0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000763c0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003b0000, 0x18e5530, 0xc000463800, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000023d00)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000023d00, 0x18b3d60, 0xc0002642d0, 0x1, 0xc0000821e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000023d00, 0x3b9aca00, 0x0, 0x1, 0xc0000821e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000023d00, 0x3b9aca00, 0xc0000821e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 00:55:17.107995   69182 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
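The "bufio.Scanner: token too long" error in the stderr block above is why the last-start log could not be appended to this post-mortem: Go's bufio.Scanner rejects any line longer than its buffer, which defaults to 64 KiB (bufio.MaxScanTokenSize), and lastStart.txt contains at least one longer line. The sketch below is illustrative only, not minikube's actual logs.go; it just shows the Scanner.Buffer call that raises the limit when reading such a file (the path is taken from the error message above).

	// Illustrative sketch: read a log file line-by-line while tolerating very
	// long lines, which the default 64 KiB bufio.Scanner limit rejects with
	// "bufio.Scanner: token too long".
	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/17977-13393/.minikube/logs/lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Grow the scanner's token limit from the 64 KiB default to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err) // would still fail if a single line exceeds 1 MiB
		}
	}
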
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-219055 -n kubernetes-upgrade-219055
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-219055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-219055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-219055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-219055: (1.126687829s)
--- FAIL: TestKubernetesUpgrade (453.06s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (69.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-934652 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-934652 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.895312607s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-934652] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-934652" primary control-plane node in "pause-934652" cluster
	* Updating the running kvm2 "pause-934652" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-934652" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:52:14.620962   66854 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:52:14.621164   66854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:52:14.621178   66854 out.go:304] Setting ErrFile to fd 2...
	I0429 00:52:14.621185   66854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:52:14.621510   66854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:52:14.622247   66854 out.go:298] Setting JSON to false
	I0429 00:52:14.623279   66854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9279,"bootTime":1714342656,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:52:14.623355   66854 start.go:139] virtualization: kvm guest
	I0429 00:52:14.698144   66854 out.go:177] * [pause-934652] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:52:14.765799   66854 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:52:14.765749   66854 notify.go:220] Checking for updates...
	I0429 00:52:14.767823   66854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:52:14.769413   66854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:52:14.770806   66854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:52:14.772356   66854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:52:14.773789   66854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:52:14.775789   66854 config.go:182] Loaded profile config "pause-934652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:52:14.776403   66854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:52:14.776455   66854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:52:14.797977   66854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0429 00:52:14.798475   66854 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:52:14.799174   66854 main.go:141] libmachine: Using API Version  1
	I0429 00:52:14.799199   66854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:52:14.799642   66854 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:52:14.799864   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:14.800158   66854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:52:14.800582   66854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:52:14.800625   66854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:52:14.815824   66854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0429 00:52:14.816261   66854 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:52:14.816763   66854 main.go:141] libmachine: Using API Version  1
	I0429 00:52:14.816789   66854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:52:14.817088   66854 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:52:14.817312   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:14.853365   66854 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:52:14.854831   66854 start.go:297] selected driver: kvm2
	I0429 00:52:14.854853   66854 start.go:901] validating driver "kvm2" against &{Name:pause-934652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.0 ClusterName:pause-934652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:52:14.855045   66854 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:52:14.855486   66854 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:52:14.855573   66854 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:52:14.872057   66854 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:52:14.872805   66854 cni.go:84] Creating CNI manager for ""
	I0429 00:52:14.872821   66854 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:52:14.872875   66854 start.go:340] cluster config:
	{Name:pause-934652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-934652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:52:14.872996   66854 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:52:14.876365   66854 out.go:177] * Starting "pause-934652" primary control-plane node in "pause-934652" cluster
	I0429 00:52:14.877852   66854 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:52:14.877911   66854 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 00:52:14.877925   66854 cache.go:56] Caching tarball of preloaded images
	I0429 00:52:14.878119   66854 preload.go:173] Found /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 00:52:14.878134   66854 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 00:52:14.878242   66854 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/config.json ...
	I0429 00:52:14.878459   66854 start.go:360] acquireMachinesLock for pause-934652: {Name:mkf607109012ed2f4aeb283e90eb20782d746716 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 00:52:32.507472   66854 start.go:364] duration metric: took 17.62896166s to acquireMachinesLock for "pause-934652"
	I0429 00:52:32.507541   66854 start.go:96] Skipping create...Using existing machine configuration
	I0429 00:52:32.507553   66854 fix.go:54] fixHost starting: 
	I0429 00:52:32.507986   66854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:52:32.508027   66854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:52:32.524865   66854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35539
	I0429 00:52:32.525255   66854 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:52:32.525792   66854 main.go:141] libmachine: Using API Version  1
	I0429 00:52:32.525817   66854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:52:32.526187   66854 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:52:32.526420   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:32.526575   66854 main.go:141] libmachine: (pause-934652) Calling .GetState
	I0429 00:52:32.528437   66854 fix.go:112] recreateIfNeeded on pause-934652: state=Running err=<nil>
	W0429 00:52:32.528472   66854 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 00:52:32.530333   66854 out.go:177] * Updating the running kvm2 "pause-934652" VM ...
	I0429 00:52:32.531686   66854 machine.go:94] provisionDockerMachine start ...
	I0429 00:52:32.531714   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:32.531925   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:32.535318   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.535805   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:32.535844   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.536019   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:32.536187   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.536377   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.536534   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:32.536698   66854 main.go:141] libmachine: Using SSH client type: native
	I0429 00:52:32.536886   66854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0429 00:52:32.536897   66854 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 00:52:32.659588   66854 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-934652
	
	I0429 00:52:32.659623   66854 main.go:141] libmachine: (pause-934652) Calling .GetMachineName
	I0429 00:52:32.659878   66854 buildroot.go:166] provisioning hostname "pause-934652"
	I0429 00:52:32.659905   66854 main.go:141] libmachine: (pause-934652) Calling .GetMachineName
	I0429 00:52:32.660050   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:32.662534   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.662971   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:32.663001   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.663121   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:32.663308   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.663466   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.663621   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:32.663805   66854 main.go:141] libmachine: Using SSH client type: native
	I0429 00:52:32.663994   66854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0429 00:52:32.664011   66854 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-934652 && echo "pause-934652" | sudo tee /etc/hostname
	I0429 00:52:32.804076   66854 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-934652
	
	I0429 00:52:32.804105   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:32.807332   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.807772   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:32.807806   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.808006   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:32.808185   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.808358   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:32.808492   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:32.808681   66854 main.go:141] libmachine: Using SSH client type: native
	I0429 00:52:32.808896   66854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0429 00:52:32.808913   66854 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-934652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-934652/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-934652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 00:52:32.936316   66854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 00:52:32.936359   66854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17977-13393/.minikube CaCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17977-13393/.minikube}
	I0429 00:52:32.936382   66854 buildroot.go:174] setting up certificates
	I0429 00:52:32.936392   66854 provision.go:84] configureAuth start
	I0429 00:52:32.936403   66854 main.go:141] libmachine: (pause-934652) Calling .GetMachineName
	I0429 00:52:32.936762   66854 main.go:141] libmachine: (pause-934652) Calling .GetIP
	I0429 00:52:32.939731   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.940150   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:32.940180   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.940384   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:32.942883   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.943291   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:32.943322   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:32.943462   66854 provision.go:143] copyHostCerts
	I0429 00:52:32.943508   66854 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem, removing ...
	I0429 00:52:32.943517   66854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem
	I0429 00:52:32.943570   66854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/ca.pem (1082 bytes)
	I0429 00:52:32.943729   66854 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem, removing ...
	I0429 00:52:32.943743   66854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem
	I0429 00:52:32.943775   66854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/cert.pem (1123 bytes)
	I0429 00:52:32.943848   66854 exec_runner.go:144] found /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem, removing ...
	I0429 00:52:32.943856   66854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem
	I0429 00:52:32.943874   66854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17977-13393/.minikube/key.pem (1675 bytes)
	I0429 00:52:32.943922   66854 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem org=jenkins.pause-934652 san=[127.0.0.1 192.168.39.185 localhost minikube pause-934652]
	I0429 00:52:33.125066   66854 provision.go:177] copyRemoteCerts
	I0429 00:52:33.125119   66854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 00:52:33.125139   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:33.127943   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:33.128374   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:33.128421   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:33.128603   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:33.128773   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:33.128924   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:33.129087   66854 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/pause-934652/id_rsa Username:docker}
	I0429 00:52:33.219890   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 00:52:33.259644   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 00:52:33.296648   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 00:52:33.328097   66854 provision.go:87] duration metric: took 391.694342ms to configureAuth
	I0429 00:52:33.328130   66854 buildroot.go:189] setting minikube options for container-runtime
	I0429 00:52:33.328356   66854 config.go:182] Loaded profile config "pause-934652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:52:33.328435   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:33.331310   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:33.331622   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:33.331646   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:33.331875   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:33.332078   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:33.332244   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:33.332407   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:33.332649   66854 main.go:141] libmachine: Using SSH client type: native
	I0429 00:52:33.332888   66854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0429 00:52:33.332914   66854 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 00:52:39.038309   66854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 00:52:39.038340   66854 machine.go:97] duration metric: took 6.506639847s to provisionDockerMachine
	I0429 00:52:39.038356   66854 start.go:293] postStartSetup for "pause-934652" (driver="kvm2")
	I0429 00:52:39.038368   66854 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 00:52:39.038387   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:39.038755   66854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 00:52:39.038786   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:39.042008   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.042514   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:39.042542   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.042826   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:39.043062   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:39.043226   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:39.043443   66854 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/pause-934652/id_rsa Username:docker}
	I0429 00:52:39.146888   66854 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 00:52:39.153324   66854 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 00:52:39.153361   66854 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/addons for local assets ...
	I0429 00:52:39.153442   66854 filesync.go:126] Scanning /home/jenkins/minikube-integration/17977-13393/.minikube/files for local assets ...
	I0429 00:52:39.153561   66854 filesync.go:149] local asset: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem -> 207272.pem in /etc/ssl/certs
	I0429 00:52:39.153699   66854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 00:52:39.169611   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:52:39.204554   66854 start.go:296] duration metric: took 166.183847ms for postStartSetup
	I0429 00:52:39.204603   66854 fix.go:56] duration metric: took 6.697050359s for fixHost
	I0429 00:52:39.204629   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:39.207315   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.207752   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:39.207785   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.207974   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:39.208186   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:39.208346   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:39.208510   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:39.208675   66854 main.go:141] libmachine: Using SSH client type: native
	I0429 00:52:39.208837   66854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0429 00:52:39.208849   66854 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 00:52:39.327879   66854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714351959.318727459
	
	I0429 00:52:39.327904   66854 fix.go:216] guest clock: 1714351959.318727459
	I0429 00:52:39.327913   66854 fix.go:229] Guest: 2024-04-29 00:52:39.318727459 +0000 UTC Remote: 2024-04-29 00:52:39.204608597 +0000 UTC m=+24.640267738 (delta=114.118862ms)
	I0429 00:52:39.327938   66854 fix.go:200] guest clock delta is within tolerance: 114.118862ms
	I0429 00:52:39.327949   66854 start.go:83] releasing machines lock for "pause-934652", held for 6.820437515s
	I0429 00:52:39.327978   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:39.328260   66854 main.go:141] libmachine: (pause-934652) Calling .GetIP
	I0429 00:52:39.331230   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.331640   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:39.331679   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.331856   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:39.332443   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:39.332632   66854 main.go:141] libmachine: (pause-934652) Calling .DriverName
	I0429 00:52:39.332715   66854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 00:52:39.332752   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:39.332867   66854 ssh_runner.go:195] Run: cat /version.json
	I0429 00:52:39.332891   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHHostname
	I0429 00:52:39.335583   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.335606   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.336046   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:39.336083   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:39.336110   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.336125   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:39.336220   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:39.336302   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHPort
	I0429 00:52:39.336375   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:39.336459   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHKeyPath
	I0429 00:52:39.336521   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:39.336579   66854 main.go:141] libmachine: (pause-934652) Calling .GetSSHUsername
	I0429 00:52:39.336640   66854 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/pause-934652/id_rsa Username:docker}
	I0429 00:52:39.336716   66854 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/pause-934652/id_rsa Username:docker}
	I0429 00:52:39.420761   66854 ssh_runner.go:195] Run: systemctl --version
	I0429 00:52:39.456507   66854 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 00:52:39.619593   66854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 00:52:39.626447   66854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 00:52:39.757936   66854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 00:52:39.772537   66854 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 00:52:39.772565   66854 start.go:494] detecting cgroup driver to use...
	I0429 00:52:39.772630   66854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 00:52:39.795072   66854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 00:52:39.816101   66854 docker.go:217] disabling cri-docker service (if available) ...
	I0429 00:52:39.816170   66854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 00:52:39.835709   66854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 00:52:39.853809   66854 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 00:52:40.014156   66854 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 00:52:40.177004   66854 docker.go:233] disabling docker service ...
	I0429 00:52:40.177084   66854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 00:52:40.196647   66854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 00:52:40.212293   66854 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 00:52:40.377588   66854 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 00:52:40.535873   66854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 00:52:40.557445   66854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 00:52:40.586457   66854 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 00:52:40.586532   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.602489   66854 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 00:52:40.602553   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.617496   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.632741   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.650080   66854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 00:52:40.664550   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.680166   66854 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.692707   66854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 00:52:40.707531   66854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 00:52:40.721425   66854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 00:52:40.734904   66854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:52:40.899827   66854 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 00:52:47.563396   66854 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.663529088s)
	I0429 00:52:47.563441   66854 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 00:52:47.563513   66854 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 00:52:47.571693   66854 start.go:562] Will wait 60s for crictl version
	I0429 00:52:47.571752   66854 ssh_runner.go:195] Run: which crictl
	I0429 00:52:47.576784   66854 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 00:52:47.623883   66854 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 00:52:47.623980   66854 ssh_runner.go:195] Run: crio --version
	I0429 00:52:47.662426   66854 ssh_runner.go:195] Run: crio --version
	I0429 00:52:47.698607   66854 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 00:52:47.700297   66854 main.go:141] libmachine: (pause-934652) Calling .GetIP
	I0429 00:52:47.703532   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:47.703884   66854 main.go:141] libmachine: (pause-934652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:8f:da", ip: ""} in network mk-pause-934652: {Iface:virbr3 ExpiryTime:2024-04-29 01:51:28 +0000 UTC Type:0 Mac:52:54:00:52:8f:da Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:pause-934652 Clientid:01:52:54:00:52:8f:da}
	I0429 00:52:47.703918   66854 main.go:141] libmachine: (pause-934652) DBG | domain pause-934652 has defined IP address 192.168.39.185 and MAC address 52:54:00:52:8f:da in network mk-pause-934652
	I0429 00:52:47.704196   66854 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 00:52:47.710912   66854 kubeadm.go:877] updating cluster {Name:pause-934652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-934652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 00:52:47.711125   66854 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 00:52:47.711179   66854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:52:47.760238   66854 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:52:47.760279   66854 crio.go:433] Images already preloaded, skipping extraction
	I0429 00:52:47.760337   66854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 00:52:47.818373   66854 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 00:52:47.818402   66854 cache_images.go:84] Images are preloaded, skipping loading
	I0429 00:52:47.818411   66854 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.0 crio true true} ...
	I0429 00:52:47.818536   66854 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-934652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-934652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 00:52:47.818620   66854 ssh_runner.go:195] Run: crio config
	I0429 00:52:47.884879   66854 cni.go:84] Creating CNI manager for ""
	I0429 00:52:47.884909   66854 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:52:47.884925   66854 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 00:52:47.884953   66854 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-934652 NodeName:pause-934652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 00:52:47.885123   66854 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-934652"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 00:52:47.885206   66854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 00:52:47.897676   66854 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 00:52:47.897756   66854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 00:52:47.910033   66854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 00:52:47.932577   66854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 00:52:47.954151   66854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0429 00:52:47.975282   66854 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0429 00:52:47.980416   66854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 00:52:48.123840   66854 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 00:52:48.141757   66854 certs.go:68] Setting up /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652 for IP: 192.168.39.185
	I0429 00:52:48.141788   66854 certs.go:194] generating shared ca certs ...
	I0429 00:52:48.141808   66854 certs.go:226] acquiring lock for ca certs: {Name:mka5d31eea282c20e045cfb5e09273edc965e467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 00:52:48.142012   66854 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key
	I0429 00:52:48.142110   66854 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key
	I0429 00:52:48.142128   66854 certs.go:256] generating profile certs ...
	I0429 00:52:48.142255   66854 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/client.key
	I0429 00:52:48.142338   66854 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/apiserver.key.6f166631
	I0429 00:52:48.142390   66854 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/proxy-client.key
	I0429 00:52:48.142537   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem (1338 bytes)
	W0429 00:52:48.142586   66854 certs.go:480] ignoring /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727_empty.pem, impossibly tiny 0 bytes
	I0429 00:52:48.142600   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 00:52:48.142646   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/ca.pem (1082 bytes)
	I0429 00:52:48.142679   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/cert.pem (1123 bytes)
	I0429 00:52:48.142715   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/certs/key.pem (1675 bytes)
	I0429 00:52:48.142775   66854 certs.go:484] found cert: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem (1708 bytes)
	I0429 00:52:48.143706   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 00:52:48.173949   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 00:52:48.207146   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 00:52:48.241997   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 00:52:48.280055   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 00:52:48.314458   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 00:52:48.344002   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 00:52:48.374438   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/pause-934652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 00:52:48.403892   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/certs/20727.pem --> /usr/share/ca-certificates/20727.pem (1338 bytes)
	I0429 00:52:48.431404   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/ssl/certs/207272.pem --> /usr/share/ca-certificates/207272.pem (1708 bytes)
	I0429 00:52:48.526417   66854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17977-13393/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 00:52:48.694875   66854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 00:52:48.784023   66854 ssh_runner.go:195] Run: openssl version
	I0429 00:52:48.845187   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20727.pem && ln -fs /usr/share/ca-certificates/20727.pem /etc/ssl/certs/20727.pem"
	I0429 00:52:48.946348   66854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20727.pem
	I0429 00:52:48.979363   66854 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 28 23:49 /usr/share/ca-certificates/20727.pem
	I0429 00:52:48.979448   66854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20727.pem
	I0429 00:52:49.005459   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20727.pem /etc/ssl/certs/51391683.0"
	I0429 00:52:49.034853   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207272.pem && ln -fs /usr/share/ca-certificates/207272.pem /etc/ssl/certs/207272.pem"
	I0429 00:52:49.180474   66854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207272.pem
	I0429 00:52:49.214586   66854 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 28 23:49 /usr/share/ca-certificates/207272.pem
	I0429 00:52:49.214662   66854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207272.pem
	I0429 00:52:49.232076   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207272.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 00:52:49.272616   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 00:52:49.312762   66854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:52:49.330797   66854 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 28 23:08 /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:52:49.330910   66854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 00:52:49.370929   66854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 00:52:49.516874   66854 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 00:52:49.529799   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 00:52:49.552498   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 00:52:49.571833   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 00:52:49.629259   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 00:52:49.645737   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 00:52:49.665461   66854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 00:52:49.694923   66854 kubeadm.go:391] StartCluster: {Name:pause-934652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-934652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:52:49.695065   66854 cri.go:56] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 00:52:49.695123   66854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 00:52:49.764115   66854 cri.go:91] found id: "3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6"
	I0429 00:52:49.764145   66854 cri.go:91] found id: "7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b"
	I0429 00:52:49.764151   66854 cri.go:91] found id: "e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64"
	I0429 00:52:49.764156   66854 cri.go:91] found id: "bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960"
	I0429 00:52:49.764160   66854 cri.go:91] found id: "51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213"
	I0429 00:52:49.764165   66854 cri.go:91] found id: "c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd"
	I0429 00:52:49.764168   66854 cri.go:91] found id: "34bcfb979fabb8747b7d141a74b1ead8b90dcea9e595641c559ca2334f0e8711"
	I0429 00:52:49.764172   66854 cri.go:91] found id: "895f3f6deb8337b866a3df265a0b29a4698060967c093f020c63e938d06a5cfd"
	I0429 00:52:49.764175   66854 cri.go:91] found id: "230ab3045a89abcbae813792abbca75a3f708088a3332be53e80c47a0f1a3846"
	I0429 00:52:49.764183   66854 cri.go:91] found id: "99abe28865235e7e34e1e61e7527c5392d28b1a0f6c704c97cdca77483e87b28"
	I0429 00:52:49.764187   66854 cri.go:91] found id: "73e5aa30e1f3918556d3c6cf4e052c651709affbb514e6fd40568b4cbb6c73d0"
	I0429 00:52:49.764191   66854 cri.go:91] found id: ""
	I0429 00:52:49.764262   66854 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
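
The provisioning log above reconfigures CRI-O (pause image, cgroupfs cgroup manager, conmon cgroup) through a series of sed edits before restarting the service. A minimal shell sketch that collects those same steps, assuming the stock minikube drop-in at /etc/crio/crio.conf.d/02-crio.conf, looks like:

	#!/bin/bash
	# Sketch: reapply the CRI-O settings configured in the log above, then restart.
	set -euo pipefail
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pause image and cgroup driver used by this run.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# Recreate conmon_cgroup directly below cgroup_manager.
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Pick up the new settings.
	sudo systemctl daemon-reload
	sudo systemctl restart crio

After the restart, sudo crictl version should report RuntimeName cri-o, matching the check the run performs at 00:52:47.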
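
The certificate freshness checks near the end of the run (openssl x509 ... -checkend 86400) verify that each control-plane certificate is still valid for at least another 24 hours. A compact sketch of the same check, assuming the /var/lib/minikube/certs layout shown above, is:

	#!/bin/bash
	# Sketch: report any certificate from the log above that expires within 24h (86400s).
	set -euo pipefail
	cd /var/lib/minikube/certs
	for cert in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
	            etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
	            front-proxy-client.crt; do
	  if ! openssl x509 -noout -in "$cert" -checkend 86400; then
	    echo "certificate $cert expires within 24 hours"
	  fi
	done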
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-934652 -n pause-934652
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-934652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-934652 logs -n 25: (1.460111052s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo find                           | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo crio                           | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-067605                                     | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:50 UTC |
	| start   | -p pause-934652 --memory=2048                        | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:52 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-634323                            | stopped-upgrade-634323    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p cert-expiration-523983                            | cert-expiration-523983    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-069355 sudo                          | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-069355                               | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p force-systemd-flag-106262                         | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-934652                                      | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:53 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-106262 ssh cat                    | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-106262                         | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	| start   | -p cert-options-124477                               | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-219055                         | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	| start   | -p kubernetes-upgrade-219055                         | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:53:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:53:19.554999   67629 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:53:19.555149   67629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:53:19.555162   67629 out.go:304] Setting ErrFile to fd 2...
	I0429 00:53:19.555170   67629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:53:19.555430   67629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:53:19.556021   67629 out.go:298] Setting JSON to false
	I0429 00:53:19.557171   67629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9344,"bootTime":1714342656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:53:19.557236   67629 start.go:139] virtualization: kvm guest
	I0429 00:53:19.559691   67629 out.go:177] * [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:53:19.561288   67629 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:53:19.561242   67629 notify.go:220] Checking for updates...
	I0429 00:53:19.562661   67629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:53:19.564037   67629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:53:19.565364   67629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:53:19.566633   67629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:53:19.567973   67629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:53:19.569706   67629 config.go:182] Loaded profile config "kubernetes-upgrade-219055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 00:53:19.570188   67629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:53:19.570237   67629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:53:19.585772   67629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0429 00:53:19.586414   67629 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:53:19.586928   67629 main.go:141] libmachine: Using API Version  1
	I0429 00:53:19.586954   67629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:53:19.587316   67629 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:53:19.587517   67629 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:53:19.587736   67629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:53:19.588003   67629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:53:19.588034   67629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:53:19.602349   67629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0429 00:53:19.602713   67629 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:53:19.603243   67629 main.go:141] libmachine: Using API Version  1
	I0429 00:53:19.603270   67629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:53:19.603551   67629 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:53:19.603749   67629 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:53:19.638845   67629 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:53:19.640258   67629 start.go:297] selected driver: kvm2
	I0429 00:53:19.640280   67629 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:53:19.640392   67629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:53:19.641045   67629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:53:19.641119   67629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:53:19.660398   67629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:53:19.660989   67629 cni.go:84] Creating CNI manager for ""
	I0429 00:53:19.661017   67629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:53:19.661080   67629 start.go:340] cluster config:
	{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:53:19.661240   67629 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:53:19.663113   67629 out.go:177] * Starting "kubernetes-upgrade-219055" primary control-plane node in "kubernetes-upgrade-219055" cluster
	
	
	==> CRI-O <==
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.215760583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd7c6f2a-f164-42a7-b270-3a4497838446 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.216926256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d1d7496-4759-40e4-8c95-b258d3b5b29f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.217272902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352000217250798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d1d7496-4759-40e4-8c95-b258d3b5b29f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.217812482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92e4dc89-2e21-4ebc-b197-884565dfde4e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.217868166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92e4dc89-2e21-4ebc-b197-884565dfde4e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.218099815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92e4dc89-2e21-4ebc-b197-884565dfde4e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.264487743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3991a89b-b672-4b37-884b-4740fcf3bd05 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.264591299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3991a89b-b672-4b37-884b-4740fcf3bd05 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.266243297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e3432b5-8fae-441a-8269-81eea8328e52 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.266688100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352000266666469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e3432b5-8fae-441a-8269-81eea8328e52 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.267335870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13ee1a2c-0215-490f-9dbf-564484206053 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.267445013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13ee1a2c-0215-490f-9dbf-564484206053 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.267702265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13ee1a2c-0215-490f-9dbf-564484206053 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.323613586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=739dee53-37a7-4c25-80af-23d20532d1d0 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.323686543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=739dee53-37a7-4c25-80af-23d20532d1d0 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.325030151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa4665e6-7664-433a-a7be-45df70ac63f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.325518694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352000325374996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa4665e6-7664-433a-a7be-45df70ac63f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.326135066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81658035-e939-4841-b04c-3315a783ea61 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.326225968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81658035-e939-4841-b04c-3315a783ea61 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.326545718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81658035-e939-4841-b04c-3315a783ea61 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.339167157Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ec15dab-d3c8-4fd0-98b6-fedcc4638f18 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.339738812Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sn9xc,Uid:48b4272a-1a80-45cf-a204-40298df52fce,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968735957114,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:52:10.628198648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&PodSandboxMetadata{Name:etcd-pause-934652,Uid:edf94d85957615174e22ded817a97d9e,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1714351968578080788,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: edf94d85957615174e22ded817a97d9e,kubernetes.io/config.seen: 2024-04-29T00:51:57.116071142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-934652,Uid:55a24b50dd0cf5d551986832494ade71,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968557152662,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 55a24b50dd0cf5d551986832494ade71,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 55a24b50dd0cf5d551986832494ade71,kubernetes.io/config.seen: 2024-04-29T00:51:57.116072440Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-934652,Uid:250db48c918b5f1a2893ba69b1006715,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968545780231,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 250db48c918b5f1a2893ba69b1006715,kubernetes.io/config.seen: 2024-04-29T00:51:57.116067700Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-934652,Uid:a8618c73b4cf8d62c17731e4f0958049,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968527472293,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a8618c73b4cf8d62c17731e4f0958049,kubernetes.io/config.seen: 2024-04-29T00:51:57.116073382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&PodSandboxMetadata{Name:kube-proxy-g5fvm,Uid:de3a2710-df1b-486c-b242-1ec7766c66f2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1714351968512630732,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:52:10.371003154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3ec15dab-d3c8-4fd0-98b6-fedcc4638f18 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.340255162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae691c1b-deaa-404d-b427-3b6d6d5718ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.340307088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae691c1b-deaa-404d-b427-3b6d6d5718ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:20 pause-934652 crio[2228]: time="2024-04-29 00:53:20.341117025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae691c1b-deaa-404d-b427-3b6d6d5718ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b0768f7987b68       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   19 seconds ago       Running             kube-proxy                2                   d3b435dac5750       kube-proxy-g5fvm
	346ad21ae6aa7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   23 seconds ago       Running             kube-controller-manager   2                   a53a0d436ddff       kube-controller-manager-pause-934652
	3050e34b85954       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   23 seconds ago       Running             kube-scheduler            2                   6c476d5d1ec84       kube-scheduler-pause-934652
	ddd78284c304a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   d3d4a0c4e4712       etcd-pause-934652
	14466fd12b72f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   23 seconds ago       Running             kube-apiserver            2                   32b733ddc8254       kube-apiserver-pause-934652
	0b07f1b0420eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago       Running             coredns                   1                   f4d5387b47757       coredns-7db6d8ff4d-sn9xc
	3b288866277fe       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   31 seconds ago       Exited              kube-apiserver            1                   32b733ddc8254       kube-apiserver-pause-934652
	7fa42718ec838       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   31 seconds ago       Exited              etcd                      1                   d3d4a0c4e4712       etcd-pause-934652
	e1d5067da6069       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   31 seconds ago       Exited              kube-scheduler            1                   6c476d5d1ec84       kube-scheduler-pause-934652
	bd81162d7c151       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   31 seconds ago       Exited              kube-controller-manager   1                   a53a0d436ddff       kube-controller-manager-pause-934652
	51f23d939c32b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   31 seconds ago       Exited              kube-proxy                1                   d3b435dac5750       kube-proxy-g5fvm
	c7de719c6e2f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   041023e2e25ae       coredns-7db6d8ff4d-sn9xc
	
	
	==> coredns [0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46675 - 28533 "HINFO IN 3722021531145092725.8522558893245719435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01341867s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47656 - 59016 "HINFO IN 2795413329019050417.1007320012138280563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014034528s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-934652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-934652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=pause-934652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T00_51_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-934652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:53:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    pause-934652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4289a3a00fa14b86b0bee51df6af8ccd
	  System UUID:                4289a3a0-0fa1-4b86-b0be-e51df6af8ccd
	  Boot ID:                    3931d743-0895-4dbd-ab09-be4f028480db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sn9xc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-pause-934652                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         83s
	  kube-system                 kube-apiserver-pause-934652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-934652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-g5fvm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-pause-934652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     83s                kubelet          Node pause-934652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s                kubelet          Node pause-934652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s                kubelet          Node pause-934652 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeReady                82s                kubelet          Node pause-934652 status is now: NodeReady
	  Normal  RegisteredNode           71s                node-controller  Node pause-934652 event: Registered Node pause-934652 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-934652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-934652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-934652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-934652 event: Registered Node pause-934652 in Controller
	
	
	==> dmesg <==
	[  +0.073163] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074118] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.169348] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.170872] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.329271] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +5.221670] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.065379] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.198553] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.079562] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.978497] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.087612] kauditd_printk_skb: 41 callbacks suppressed
	[Apr29 00:52] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	[  +0.169543] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.954068] kauditd_printk_skb: 69 callbacks suppressed
	[ +18.936312] systemd-fstab-generator[2147]: Ignoring "noauto" option for root device
	[  +0.153373] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[  +0.190372] systemd-fstab-generator[2174]: Ignoring "noauto" option for root device
	[  +0.169937] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.356554] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +7.233804] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.085482] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.557881] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.343177] systemd-fstab-generator[3085]: Ignoring "noauto" option for root device
	[  +4.689386] kauditd_printk_skb: 42 callbacks suppressed
	[Apr29 00:53] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	
	
	==> etcd [7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b] <==
	{"level":"info","ts":"2024-04-29T00:52:50.094336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:51.431614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.434122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:51.436541Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:pause-934652 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:52:51.436743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:51.437061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:51.437149Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:51.436777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-04-29T00:52:51.438715Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:52:53.207446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T00:52:53.207543Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-934652","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-04-29T00:52:53.207649Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.207745Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.209304Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.209328Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:52:53.210776Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-04-29T00:52:53.214667Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:53.214744Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:53.214767Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-934652","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> etcd [ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c] <==
	{"level":"info","ts":"2024-04-29T00:52:57.23238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:57.252538Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:57.257657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-04-29T00:52:57.257769Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-04-29T00:52:57.257876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:52:57.257916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:52:57.260942Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T00:52:57.261153Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T00:52:57.261206Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:52:57.261322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:57.261354Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:58.284492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.295652Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:pause-934652 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:52:58.295789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:58.299758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:52:58.325461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:58.330055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-04-29T00:52:58.333476Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:58.333545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:53:20 up 2 min,  0 users,  load average: 0.90, 0.31, 0.11
	Linux pause-934652 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f] <==
	I0429 00:52:59.880859       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 00:52:59.880884       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:52:59.881302       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:52:59.881346       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:52:59.881370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:52:59.881870       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:52:59.911450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:52:59.944078       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 00:53:00.004729       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:53:00.005520       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:53:00.004820       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:53:00.006070       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:53:00.006778       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:53:00.021476       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:53:00.028166       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:53:00.028188       1 policy_source.go:224] refreshing policies
	I0429 00:53:00.049130       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:53:00.821135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:53:01.511784       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 00:53:01.529859       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:53:01.588536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:53:01.623990       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:53:01.631738       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:53:12.191759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:53:12.326614       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6] <==
	I0429 00:52:52.906368       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:52:52.906518       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:52:52.906561       1 crd_finalizer.go:270] Shutting down CRDFinalizer
	I0429 00:52:52.906580       1 apiapproval_controller.go:190] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:52:52.906597       1 nonstructuralschema_controller.go:196] Shutting down NonStructuralSchemaConditionController
	I0429 00:52:52.906607       1 establishing_controller.go:80] Shutting down EstablishingController
	I0429 00:52:52.906625       1 naming_controller.go:295] Shutting down NamingConditionController
	E0429 00:52:52.906636       1 controller.go:92] timed out waiting for caches to sync
	E0429 00:52:52.906649       1 controller.go:145] timed out waiting for caches to sync
	E0429 00:52:52.906691       1 shared_informer.go:316] unable to sync caches for crd-autoregister
	F0429 00:52:52.906705       1 hooks.go:203] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	E0429 00:52:52.970142       1 shared_informer.go:316] unable to sync caches for configmaps
	I0429 00:52:52.970193       1 controller.go:121] Shutting down legacy_token_tracking_controller
	E0429 00:52:52.970213       1 shared_informer.go:316] unable to sync caches for cluster_authentication_trust_controller
	E0429 00:52:52.970225       1 customresource_discovery_controller.go:292] timed out waiting for caches to sync
	I0429 00:52:52.970266       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0429 00:52:52.970280       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	F0429 00:52:52.970288       1 hooks.go:203] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E0429 00:52:53.043249       1 gc_controller.go:84] timed out waiting for caches to sync
	I0429 00:52:53.043312       1 gc_controller.go:85] Shutting down apiserver lease garbage collector
	I0429 00:52:53.043621       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 00:52:53.043740       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0429 00:52:53.044571       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 00:52:53.047610       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0429 00:52:53.051554       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03] <==
	I0429 00:53:12.203931       1 shared_informer.go:320] Caches are synced for cronjob
	I0429 00:53:12.204146       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 00:53:12.208095       1 shared_informer.go:320] Caches are synced for PVC protection
	I0429 00:53:12.211031       1 shared_informer.go:320] Caches are synced for taint
	I0429 00:53:12.211191       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 00:53:12.211273       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-934652"
	I0429 00:53:12.211505       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 00:53:12.214123       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 00:53:12.211384       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 00:53:12.219037       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 00:53:12.219196       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 00:53:12.219371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0429 00:53:12.219737       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0429 00:53:12.219901       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 00:53:12.224727       1 shared_informer.go:320] Caches are synced for TTL
	I0429 00:53:12.227785       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 00:53:12.301745       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 00:53:12.314618       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 00:53:12.372263       1 shared_informer.go:320] Caches are synced for disruption
	I0429 00:53:12.414431       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 00:53:12.418069       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:53:12.443787       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:53:12.855092       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:53:12.855142       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:53:12.858053       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960] <==
	I0429 00:52:50.605668       1 serving.go:380] Generated self-signed cert in-memory
	I0429 00:52:50.853766       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 00:52:50.853846       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:52:50.855892       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 00:52:50.857753       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:52:50.857788       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:52:50.857798       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213] <==
	I0429 00:52:50.507304       1 server_linux.go:69] "Using iptables proxy"
	E0429 00:52:54.071564       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-934652\": dial tcp 192.168.39.185:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.185:35714->192.168.39.185:8443: read: connection reset by peer"
	
	
	==> kube-proxy [b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b] <==
	I0429 00:53:00.814373       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:53:00.845916       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0429 00:53:00.924824       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:53:00.924884       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:53:00.924902       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:53:00.932603       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:53:00.932843       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:53:00.932899       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:53:00.934301       1 config.go:192] "Starting service config controller"
	I0429 00:53:00.934356       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:53:00.934454       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:53:00.934461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:53:00.935058       1 config.go:319] "Starting node config controller"
	I0429 00:53:00.935127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:53:01.035464       1 shared_informer.go:320] Caches are synced for node config
	I0429 00:53:01.035604       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:53:01.035727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad] <==
	W0429 00:52:59.932213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:52:59.932250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:52:59.932481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:52:59.932524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:52:59.932579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:52:59.932616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:52:59.932788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:52:59.932827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:52:59.932933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:52:59.932970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:52:59.933032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.933067       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.933154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.933190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.933246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:52:59.933254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:52:59.933291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:52:59.933327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:52:59.935477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.935528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.935588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.935652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.935736       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:52:59.935773       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 00:53:01.301460       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64] <==
	I0429 00:52:50.939843       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:52:54.065096       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.185:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.185:8443: connect: connection refused - error from a previous attempt: EOF
	W0429 00:52:54.065121       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:52:54.065128       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:52:54.079351       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:52:54.079520       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:52:54.081641       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:52:54.081791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0429 00:52:54.082298       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0429 00:52:54.082801       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500832    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8618c73b4cf8d62c17731e4f0958049-k8s-certs\") pod \"kube-controller-manager-pause-934652\" (UID: \"a8618c73b4cf8d62c17731e4f0958049\") " pod="kube-system/kube-controller-manager-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500858    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8618c73b4cf8d62c17731e4f0958049-kubeconfig\") pod \"kube-controller-manager-pause-934652\" (UID: \"a8618c73b4cf8d62c17731e4f0958049\") " pod="kube-system/kube-controller-manager-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500874    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/250db48c918b5f1a2893ba69b1006715-kubeconfig\") pod \"kube-scheduler-pause-934652\" (UID: \"250db48c918b5f1a2893ba69b1006715\") " pod="kube-system/kube-scheduler-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.503381    3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-934652?timeout=10s\": dial tcp 192.168.39.185:8443: connect: connection refused" interval="400ms"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.598522    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.599555    3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.185:8443: connect: connection refused" node="pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.741359    3092 scope.go:117] "RemoveContainer" containerID="7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.743282    3092 scope.go:117] "RemoveContainer" containerID="3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.745111    3092 scope.go:117] "RemoveContainer" containerID="bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.751683    3092 scope.go:117] "RemoveContainer" containerID="e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.905016    3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-934652?timeout=10s\": dial tcp 192.168.39.185:8443: connect: connection refused" interval="800ms"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: I0429 00:52:57.003679    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: E0429 00:52:57.004623    3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.185:8443: connect: connection refused" node="pause-934652"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: I0429 00:52:57.807750    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.101838    3092 kubelet_node_status.go:112] "Node was previously registered" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.102317    3092 kubelet_node_status.go:76] "Successfully registered node" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.103919    3092 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.104860    3092 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.277702    3092 apiserver.go:52] "Watching apiserver"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.280827    3092 topology_manager.go:215] "Topology Admit Handler" podUID="48b4272a-1a80-45cf-a204-40298df52fce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sn9xc"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.282183    3092 topology_manager.go:215] "Topology Admit Handler" podUID="de3a2710-df1b-486c-b242-1ec7766c66f2" podNamespace="kube-system" podName="kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.293125    3092 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.332353    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de3a2710-df1b-486c-b242-1ec7766c66f2-xtables-lock\") pod \"kube-proxy-g5fvm\" (UID: \"de3a2710-df1b-486c-b242-1ec7766c66f2\") " pod="kube-system/kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.333089    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de3a2710-df1b-486c-b242-1ec7766c66f2-lib-modules\") pod \"kube-proxy-g5fvm\" (UID: \"de3a2710-df1b-486c-b242-1ec7766c66f2\") " pod="kube-system/kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.583878    3092 scope.go:117] "RemoveContainer" containerID="51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-934652 -n pause-934652
helpers_test.go:261: (dbg) Run:  kubectl --context pause-934652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-934652 -n pause-934652
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-934652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-934652 logs -n 25: (1.400980204s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo cat                            | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo                                | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo find                           | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-067605 sudo crio                           | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-067605                                     | cilium-067605             | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:50 UTC |
	| start   | -p pause-934652 --memory=2048                        | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:50 UTC | 29 Apr 24 00:52 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-634323                            | stopped-upgrade-634323    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p cert-expiration-523983                            | cert-expiration-523983    | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-069355 sudo                          | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-069355                               | NoKubernetes-069355       | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:51 UTC |
	| start   | -p force-systemd-flag-106262                         | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:51 UTC | 29 Apr 24 00:52 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-934652                                      | pause-934652              | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:53 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-106262 ssh cat                    | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-106262                         | force-systemd-flag-106262 | jenkins | v1.33.0 | 29 Apr 24 00:52 UTC | 29 Apr 24 00:52 UTC |
	| start   | -p cert-options-124477                               | cert-options-124477       | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-219055                         | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC | 29 Apr 24 00:53 UTC |
	| start   | -p kubernetes-upgrade-219055                         | kubernetes-upgrade-219055 | jenkins | v1.33.0 | 29 Apr 24 00:53 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 00:53:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 00:53:19.554999   67629 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:53:19.555149   67629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:53:19.555162   67629 out.go:304] Setting ErrFile to fd 2...
	I0429 00:53:19.555170   67629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:53:19.555430   67629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:53:19.556021   67629 out.go:298] Setting JSON to false
	I0429 00:53:19.557171   67629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9344,"bootTime":1714342656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 00:53:19.557236   67629 start.go:139] virtualization: kvm guest
	I0429 00:53:19.559691   67629 out.go:177] * [kubernetes-upgrade-219055] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 00:53:19.561288   67629 out.go:177]   - MINIKUBE_LOCATION=17977
	I0429 00:53:19.561242   67629 notify.go:220] Checking for updates...
	I0429 00:53:19.562661   67629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 00:53:19.564037   67629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0429 00:53:19.565364   67629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0429 00:53:19.566633   67629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 00:53:19.567973   67629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 00:53:19.569706   67629 config.go:182] Loaded profile config "kubernetes-upgrade-219055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 00:53:19.570188   67629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:53:19.570237   67629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:53:19.585772   67629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0429 00:53:19.586414   67629 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:53:19.586928   67629 main.go:141] libmachine: Using API Version  1
	I0429 00:53:19.586954   67629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:53:19.587316   67629 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:53:19.587517   67629 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:53:19.587736   67629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 00:53:19.588003   67629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:53:19.588034   67629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:53:19.602349   67629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0429 00:53:19.602713   67629 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:53:19.603243   67629 main.go:141] libmachine: Using API Version  1
	I0429 00:53:19.603270   67629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:53:19.603551   67629 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:53:19.603749   67629 main.go:141] libmachine: (kubernetes-upgrade-219055) Calling .DriverName
	I0429 00:53:19.638845   67629 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 00:53:19.640258   67629 start.go:297] selected driver: kvm2
	I0429 00:53:19.640280   67629 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:53:19.640392   67629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 00:53:19.641045   67629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:53:19.641119   67629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 00:53:19.660398   67629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 00:53:19.660989   67629 cni.go:84] Creating CNI manager for ""
	I0429 00:53:19.661017   67629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 00:53:19.661080   67629 start.go:340] cluster config:
	{Name:kubernetes-upgrade-219055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-219055 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 00:53:19.661240   67629 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 00:53:19.663113   67629 out.go:177] * Starting "kubernetes-upgrade-219055" primary control-plane node in "kubernetes-upgrade-219055" cluster
	I0429 00:53:18.843488   67339 main.go:141] libmachine: (cert-options-124477) DBG | domain cert-options-124477 has defined MAC address 52:54:00:1d:ec:f6 in network mk-cert-options-124477
	I0429 00:53:18.844051   67339 main.go:141] libmachine: (cert-options-124477) DBG | unable to find current IP address of domain cert-options-124477 in network mk-cert-options-124477
	I0429 00:53:18.844072   67339 main.go:141] libmachine: (cert-options-124477) DBG | I0429 00:53:18.843991   67361 retry.go:31] will retry after 4.569952616s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.252480420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52258c3b-3fd3-40f7-a508-751e3478369d name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.253556660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f205c59-0c4b-42e6-ab32-9394e11f0b99 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.253942729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352002253919653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f205c59-0c4b-42e6-ab32-9394e11f0b99 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.254496767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9976097-e3eb-4e4a-bfc6-0c24516da147 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.254573821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9976097-e3eb-4e4a-bfc6-0c24516da147 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.254810313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9976097-e3eb-4e4a-bfc6-0c24516da147 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.297032269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62da1d3c-1ae8-44f8-973a-c32b5417cb0c name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.297134968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62da1d3c-1ae8-44f8-973a-c32b5417cb0c name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.299254715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=390ac339-39f3-4fe3-b103-58069137100e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.299702961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352002299682055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=390ac339-39f3-4fe3-b103-58069137100e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.300671235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b931b15-2309-4f6c-b17f-99598e00c2dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.300745718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b931b15-2309-4f6c-b17f-99598e00c2dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.300983137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b931b15-2309-4f6c-b17f-99598e00c2dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.338212317Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db88d705-272e-4aac-b2b0-2d84649ce219 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.338479195Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sn9xc,Uid:48b4272a-1a80-45cf-a204-40298df52fce,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968735957114,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:52:10.628198648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&PodSandboxMetadata{Name:etcd-pause-934652,Uid:edf94d85957615174e22ded817a97d9e,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1714351968578080788,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: edf94d85957615174e22ded817a97d9e,kubernetes.io/config.seen: 2024-04-29T00:51:57.116071142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-934652,Uid:55a24b50dd0cf5d551986832494ade71,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968557152662,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 55a24b50dd0cf5d551986832494ade71,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 55a24b50dd0cf5d551986832494ade71,kubernetes.io/config.seen: 2024-04-29T00:51:57.116072440Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-934652,Uid:250db48c918b5f1a2893ba69b1006715,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968545780231,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 250db48c918b5f1a2893ba69b1006715,kubernetes.io/config.seen: 2024-04-29T00:51:57.116067700Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-934652,Uid:a8618c73b4cf8d62c17731e4f0958049,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714351968527472293,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a8618c73b4cf8d62c17731e4f0958049,kubernetes.io/config.seen: 2024-04-29T00:51:57.116073382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&PodSandboxMetadata{Name:kube-proxy-g5fvm,Uid:de3a2710-df1b-486c-b242-1ec7766c66f2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1714351968512630732,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T00:52:10.371003154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=db88d705-272e-4aac-b2b0-2d84649ce219 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.339149002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22162d67-bcf2-4dc5-9249-792b648adda3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.339204717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22162d67-bcf2-4dc5-9249-792b648adda3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.339346999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22162d67-bcf2-4dc5-9249-792b648adda3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.356152469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71f0abe1-beef-41dd-a95d-bf107c208817 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.356215878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71f0abe1-beef-41dd-a95d-bf107c208817 name=/runtime.v1.RuntimeService/Version
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.357585183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2181f6b7-37ee-4ca0-93a0-74ab86182bed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.358051583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714352002358029945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2181f6b7-37ee-4ca0-93a0-74ab86182bed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.358719793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af1b26b0-e68f-4c15-bf27-0dbf2360b467 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.358768734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af1b26b0-e68f-4c15-bf27-0dbf2360b467 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 00:53:22 pause-934652 crio[2228]: time="2024-04-29 00:53:22.359111906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714351980608845170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714351976790990042,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714351976770866945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714351976800317461,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714351976766193705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06,PodSandboxId:f4d5387b4775719e7cd996e95ec4489dd0eb8bcf637a5ce30c1be253991ad581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714351969905186613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b,PodSandboxId:d3d4a0c4e4712391d3644debac5d54794ee0682ab287c390bae3116feb41a01a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714351969035258219,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edf94d85957615174e22ded817a97d9e,},Annotations:map[string]string{io.kubernetes.container.hash: 9892f8f1,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960,PodSandboxId:a53a0d436ddff16f739bf475bcca15c7eaa2732fffac330416fcbd288570ec57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714351968992849922,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8618c73b4cf8d62c17731e4f0958049,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64,PodSandboxId:6c476d5d1ec84fa073242cd25693fb52ffe93de042a73e998be4ab6641e3dd64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714351969013854712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250db48c918b5f1a2893ba69b1006715,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6,PodSandboxId:32b733ddc82544a85d56b1e6df3ae3a84911e3cb3a886ada471f75e25e2ed933,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714351969057346589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-934652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a24b50dd0cf5d551986832494ade71,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213,PodSandboxId:d3b435dac5750344f106aa2f830c0f53672805b8ae947f41c046f4943e503d28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714351968851821563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5fvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3a2710-df1b-486c-b242-1ec7766c66f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4c1f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd,PodSandboxId:041023e2e25ae7df006272a81327b58912d8e521b93736072469d38b31fd0820,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714351931384027928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sn9xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b4272a-1a80-45cf-a204-40298df52fce,},Annotations:map[string]string{io.kubernetes.container.hash: aff0f32,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af1b26b0-e68f-4c15-bf27-0dbf2360b467 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b0768f7987b68       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   21 seconds ago       Running             kube-proxy                2                   d3b435dac5750       kube-proxy-g5fvm
	346ad21ae6aa7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   25 seconds ago       Running             kube-controller-manager   2                   a53a0d436ddff       kube-controller-manager-pause-934652
	3050e34b85954       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   25 seconds ago       Running             kube-scheduler            2                   6c476d5d1ec84       kube-scheduler-pause-934652
	ddd78284c304a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   d3d4a0c4e4712       etcd-pause-934652
	14466fd12b72f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   25 seconds ago       Running             kube-apiserver            2                   32b733ddc8254       kube-apiserver-pause-934652
	0b07f1b0420eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   32 seconds ago       Running             coredns                   1                   f4d5387b47757       coredns-7db6d8ff4d-sn9xc
	3b288866277fe       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   33 seconds ago       Exited              kube-apiserver            1                   32b733ddc8254       kube-apiserver-pause-934652
	7fa42718ec838       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   33 seconds ago       Exited              etcd                      1                   d3d4a0c4e4712       etcd-pause-934652
	e1d5067da6069       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   33 seconds ago       Exited              kube-scheduler            1                   6c476d5d1ec84       kube-scheduler-pause-934652
	bd81162d7c151       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   33 seconds ago       Exited              kube-controller-manager   1                   a53a0d436ddff       kube-controller-manager-pause-934652
	51f23d939c32b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   33 seconds ago       Exited              kube-proxy                1                   d3b435dac5750       kube-proxy-g5fvm
	c7de719c6e2f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   041023e2e25ae       coredns-7db6d8ff4d-sn9xc
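	
	(Editor's note, not part of the captured output: the table above is the CRI-level view of the node. Assuming the "pause-934652" profile from this run is still present and crictl is available inside the VM, as it normally is with the crio runtime, roughly the same listing can be reproduced with:
	
	  minikube -p pause-934652 ssh -- sudo crictl ps -a
	  minikube -p pause-934652 ssh -- sudo crictl inspect 3b288866277fe
	
	where "3b288866277fe" is the exited kube-apiserver attempt 1 shown in the table. This is a sketch for reproducing the view, not a command recorded by the test.)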
	
	
	==> coredns [0b07f1b0420ebbb550dd294397e4b87ed9422391a7f647fe49fac5d725c80f06] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46675 - 28533 "HINFO IN 3722021531145092725.8522558893245719435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01341867s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [c7de719c6e2f195a14d3d4c0259020aab240910c358f41d8773e0336f409e8bd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47656 - 59016 "HINFO IN 2795413329019050417.1007320012138280563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014034528s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-934652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-934652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5aea53309587d5dad960702a78dfdd5fb48b1328
	                    minikube.k8s.io/name=pause-934652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T00_51_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 00:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-934652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 00:53:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 00:53:00 +0000   Mon, 29 Apr 2024 00:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    pause-934652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 4289a3a00fa14b86b0bee51df6af8ccd
	  System UUID:                4289a3a0-0fa1-4b86-b0be-e51df6af8ccd
	  Boot ID:                    3931d743-0895-4dbd-ab09-be4f028480db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sn9xc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     72s
	  kube-system                 etcd-pause-934652                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         85s
	  kube-system                 kube-apiserver-pause-934652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-934652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-g5fvm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-pause-934652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     85s                kubelet          Node pause-934652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node pause-934652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node pause-934652 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeReady                84s                kubelet          Node pause-934652 status is now: NodeReady
	  Normal  RegisteredNode           73s                node-controller  Node pause-934652 event: Registered Node pause-934652 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-934652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-934652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-934652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-934652 event: Registered Node pause-934652 in Controller
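	
	(Editor's note, not part of the captured output: a sketch for regenerating the node view above, assuming the kubeconfig context that minikube creates for this profile still exists:
	
	  kubectl --context pause-934652 describe node pause-934652
	
	The output should match the captured section apart from ages and heartbeat timestamps.)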
	
	
	==> dmesg <==
	[  +0.073163] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074118] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.169348] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.170872] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.329271] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +5.221670] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.065379] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.198553] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.079562] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.978497] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.087612] kauditd_printk_skb: 41 callbacks suppressed
	[Apr29 00:52] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	[  +0.169543] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.954068] kauditd_printk_skb: 69 callbacks suppressed
	[ +18.936312] systemd-fstab-generator[2147]: Ignoring "noauto" option for root device
	[  +0.153373] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[  +0.190372] systemd-fstab-generator[2174]: Ignoring "noauto" option for root device
	[  +0.169937] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.356554] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +7.233804] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.085482] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.557881] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.343177] systemd-fstab-generator[3085]: Ignoring "noauto" option for root device
	[  +4.689386] kauditd_printk_skb: 42 callbacks suppressed
	[Apr29 00:53] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	
	
	==> etcd [7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b] <==
	{"level":"info","ts":"2024-04-29T00:52:50.094336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:51.431614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-04-29T00:52:51.431739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.431763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:51.434122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:51.436541Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:pause-934652 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:52:51.436743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:51.437061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:51.437149Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:51.436777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-04-29T00:52:51.438715Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:52:53.207446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T00:52:53.207543Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-934652","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-04-29T00:52:53.207649Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.207745Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.209304Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T00:52:53.209328Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T00:52:53.210776Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-04-29T00:52:53.214667Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:53.214744Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:53.214767Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-934652","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> etcd [ddd78284c304a529d392da3b27e753525fb59685c89a4adb157b79021116fd2c] <==
	{"level":"info","ts":"2024-04-29T00:52:57.23238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:57.252538Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T00:52:57.257657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-04-29T00:52:57.257769Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-04-29T00:52:57.257876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:52:57.257916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T00:52:57.260942Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T00:52:57.261153Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T00:52:57.261206Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T00:52:57.261322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:57.261354Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-29T00:52:58.284492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-29T00:52:58.284621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.284641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 4"}
	{"level":"info","ts":"2024-04-29T00:52:58.295652Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:pause-934652 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T00:52:58.295789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:58.299758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T00:52:58.325461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T00:52:58.330055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-04-29T00:52:58.333476Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T00:52:58.333545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:53:22 up 2 min,  0 users,  load average: 0.91, 0.32, 0.12
	Linux pause-934652 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14466fd12b72f3ce0cf9d921820b83bac655b16cd332244a8e658c7befb6cd2f] <==
	I0429 00:52:59.880859       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 00:52:59.880884       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 00:52:59.881302       1 aggregator.go:165] initial CRD sync complete...
	I0429 00:52:59.881346       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 00:52:59.881370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 00:52:59.881870       1 cache.go:39] Caches are synced for autoregister controller
	I0429 00:52:59.911450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 00:52:59.944078       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 00:53:00.004729       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 00:53:00.005520       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 00:53:00.004820       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 00:53:00.006070       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 00:53:00.006778       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 00:53:00.021476       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 00:53:00.028166       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 00:53:00.028188       1 policy_source.go:224] refreshing policies
	I0429 00:53:00.049130       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 00:53:00.821135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 00:53:01.511784       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 00:53:01.529859       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 00:53:01.588536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 00:53:01.623990       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 00:53:01.631738       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 00:53:12.191759       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 00:53:12.326614       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6] <==
	I0429 00:52:52.906368       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:52:52.906518       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:52:52.906561       1 crd_finalizer.go:270] Shutting down CRDFinalizer
	I0429 00:52:52.906580       1 apiapproval_controller.go:190] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0429 00:52:52.906597       1 nonstructuralschema_controller.go:196] Shutting down NonStructuralSchemaConditionController
	I0429 00:52:52.906607       1 establishing_controller.go:80] Shutting down EstablishingController
	I0429 00:52:52.906625       1 naming_controller.go:295] Shutting down NamingConditionController
	E0429 00:52:52.906636       1 controller.go:92] timed out waiting for caches to sync
	E0429 00:52:52.906649       1 controller.go:145] timed out waiting for caches to sync
	E0429 00:52:52.906691       1 shared_informer.go:316] unable to sync caches for crd-autoregister
	F0429 00:52:52.906705       1 hooks.go:203] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	E0429 00:52:52.970142       1 shared_informer.go:316] unable to sync caches for configmaps
	I0429 00:52:52.970193       1 controller.go:121] Shutting down legacy_token_tracking_controller
	E0429 00:52:52.970213       1 shared_informer.go:316] unable to sync caches for cluster_authentication_trust_controller
	E0429 00:52:52.970225       1 customresource_discovery_controller.go:292] timed out waiting for caches to sync
	I0429 00:52:52.970266       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0429 00:52:52.970280       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	F0429 00:52:52.970288       1 hooks.go:203] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E0429 00:52:53.043249       1 gc_controller.go:84] timed out waiting for caches to sync
	I0429 00:52:53.043312       1 gc_controller.go:85] Shutting down apiserver lease garbage collector
	I0429 00:52:53.043621       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 00:52:53.043740       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0429 00:52:53.044571       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 00:52:53.047610       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0429 00:52:53.051554       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [346ad21ae6aa74cc69020b38c5aa3bbcd5c74b472024c07203922f187fa8ca03] <==
	I0429 00:53:12.203931       1 shared_informer.go:320] Caches are synced for cronjob
	I0429 00:53:12.204146       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 00:53:12.208095       1 shared_informer.go:320] Caches are synced for PVC protection
	I0429 00:53:12.211031       1 shared_informer.go:320] Caches are synced for taint
	I0429 00:53:12.211191       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 00:53:12.211273       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-934652"
	I0429 00:53:12.211505       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 00:53:12.214123       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 00:53:12.211384       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 00:53:12.219037       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 00:53:12.219196       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 00:53:12.219371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0429 00:53:12.219737       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0429 00:53:12.219901       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 00:53:12.224727       1 shared_informer.go:320] Caches are synced for TTL
	I0429 00:53:12.227785       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 00:53:12.301745       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 00:53:12.314618       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 00:53:12.372263       1 shared_informer.go:320] Caches are synced for disruption
	I0429 00:53:12.414431       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 00:53:12.418069       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:53:12.443787       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 00:53:12.855092       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 00:53:12.855142       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 00:53:12.858053       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960] <==
	I0429 00:52:50.605668       1 serving.go:380] Generated self-signed cert in-memory
	I0429 00:52:50.853766       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 00:52:50.853846       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:52:50.855892       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 00:52:50.857753       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 00:52:50.857788       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 00:52:50.857798       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213] <==
	I0429 00:52:50.507304       1 server_linux.go:69] "Using iptables proxy"
	E0429 00:52:54.071564       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-934652\": dial tcp 192.168.39.185:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.185:35714->192.168.39.185:8443: read: connection reset by peer"
	
	
	==> kube-proxy [b0768f7987b68ad0166c25ea43311413dd18f1d3001148d22af706c6b250ed9b] <==
	I0429 00:53:00.814373       1 server_linux.go:69] "Using iptables proxy"
	I0429 00:53:00.845916       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0429 00:53:00.924824       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 00:53:00.924884       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 00:53:00.924902       1 server_linux.go:165] "Using iptables Proxier"
	I0429 00:53:00.932603       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 00:53:00.932843       1 server.go:872] "Version info" version="v1.30.0"
	I0429 00:53:00.932899       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:53:00.934301       1 config.go:192] "Starting service config controller"
	I0429 00:53:00.934356       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 00:53:00.934454       1 config.go:101] "Starting endpoint slice config controller"
	I0429 00:53:00.934461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 00:53:00.935058       1 config.go:319] "Starting node config controller"
	I0429 00:53:00.935127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 00:53:01.035464       1 shared_informer.go:320] Caches are synced for node config
	I0429 00:53:01.035604       1 shared_informer.go:320] Caches are synced for service config
	I0429 00:53:01.035727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3050e34b859543c93bbefed8ffd30a58c269473bf62c25be4731ba46007961ad] <==
	W0429 00:52:59.932213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 00:52:59.932250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 00:52:59.932481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 00:52:59.932524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 00:52:59.932579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 00:52:59.932616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 00:52:59.932788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 00:52:59.932827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 00:52:59.932933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 00:52:59.932970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 00:52:59.933032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.933067       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.933154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.933190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.933246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 00:52:59.933254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 00:52:59.933291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 00:52:59.933327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 00:52:59.935477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.935528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.935588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 00:52:59.935652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 00:52:59.935736       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 00:52:59.935773       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 00:53:01.301460       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64] <==
	I0429 00:52:50.939843       1 serving.go:380] Generated self-signed cert in-memory
	W0429 00:52:54.065096       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.185:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.185:8443: connect: connection refused - error from a previous attempt: EOF
	W0429 00:52:54.065121       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 00:52:54.065128       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 00:52:54.079351       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 00:52:54.079520       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 00:52:54.081641       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 00:52:54.081791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0429 00:52:54.082298       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0429 00:52:54.082801       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500832    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8618c73b4cf8d62c17731e4f0958049-k8s-certs\") pod \"kube-controller-manager-pause-934652\" (UID: \"a8618c73b4cf8d62c17731e4f0958049\") " pod="kube-system/kube-controller-manager-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500858    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8618c73b4cf8d62c17731e4f0958049-kubeconfig\") pod \"kube-controller-manager-pause-934652\" (UID: \"a8618c73b4cf8d62c17731e4f0958049\") " pod="kube-system/kube-controller-manager-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.500874    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/250db48c918b5f1a2893ba69b1006715-kubeconfig\") pod \"kube-scheduler-pause-934652\" (UID: \"250db48c918b5f1a2893ba69b1006715\") " pod="kube-system/kube-scheduler-pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.503381    3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-934652?timeout=10s\": dial tcp 192.168.39.185:8443: connect: connection refused" interval="400ms"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.598522    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.599555    3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.185:8443: connect: connection refused" node="pause-934652"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.741359    3092 scope.go:117] "RemoveContainer" containerID="7fa42718ec838c8f49c9bb69e2af48e6bf6efeb011f98ff3457827da8bd7253b"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.743282    3092 scope.go:117] "RemoveContainer" containerID="3b288866277fe5cd13cb7026e6618e1f6b9faaf8353c56b94d118408c7b6ade6"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.745111    3092 scope.go:117] "RemoveContainer" containerID="bd81162d7c15187ebcd522922c9a0361482152ebe1aa91a48d4de047f111d960"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: I0429 00:52:56.751683    3092 scope.go:117] "RemoveContainer" containerID="e1d5067da606911f451c9c6457f376bcdfdc15ea378323268638502d69910e64"
	Apr 29 00:52:56 pause-934652 kubelet[3092]: E0429 00:52:56.905016    3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-934652?timeout=10s\": dial tcp 192.168.39.185:8443: connect: connection refused" interval="800ms"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: I0429 00:52:57.003679    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: E0429 00:52:57.004623    3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.185:8443: connect: connection refused" node="pause-934652"
	Apr 29 00:52:57 pause-934652 kubelet[3092]: I0429 00:52:57.807750    3092 kubelet_node_status.go:73] "Attempting to register node" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.101838    3092 kubelet_node_status.go:112] "Node was previously registered" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.102317    3092 kubelet_node_status.go:76] "Successfully registered node" node="pause-934652"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.103919    3092 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.104860    3092 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.277702    3092 apiserver.go:52] "Watching apiserver"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.280827    3092 topology_manager.go:215] "Topology Admit Handler" podUID="48b4272a-1a80-45cf-a204-40298df52fce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sn9xc"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.282183    3092 topology_manager.go:215] "Topology Admit Handler" podUID="de3a2710-df1b-486c-b242-1ec7766c66f2" podNamespace="kube-system" podName="kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.293125    3092 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.332353    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de3a2710-df1b-486c-b242-1ec7766c66f2-xtables-lock\") pod \"kube-proxy-g5fvm\" (UID: \"de3a2710-df1b-486c-b242-1ec7766c66f2\") " pod="kube-system/kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.333089    3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de3a2710-df1b-486c-b242-1ec7766c66f2-lib-modules\") pod \"kube-proxy-g5fvm\" (UID: \"de3a2710-df1b-486c-b242-1ec7766c66f2\") " pod="kube-system/kube-proxy-g5fvm"
	Apr 29 00:53:00 pause-934652 kubelet[3092]: I0429 00:53:00.583878    3092 scope.go:117] "RemoveContainer" containerID="51f23d939c32be0691533671c72fb6ec7b8928eae980fa0ee76117cb1faca213"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-934652 -n pause-934652
helpers_test.go:261: (dbg) Run:  kubectl --context pause-934652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.03s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7200.066s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-873836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 01:00:48.628558   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 01:05:48.628969   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (16m25s)
	TestStartStop (16m25s)
	TestStartStop/group/default-k8s-diff-port (10m47s)
	TestStartStop/group/default-k8s-diff-port/serial (10m47s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6m21s)
	TestStartStop/group/embed-certs (11m49s)
	TestStartStop/group/embed-certs/serial (11m49s)
	TestStartStop/group/embed-certs/serial/SecondStart (8m4s)
	TestStartStop/group/no-preload (13m21s)
	TestStartStop/group/no-preload/serial (13m21s)
	TestStartStop/group/no-preload/serial/SecondStart (8m58s)
	TestStartStop/group/old-k8s-version (13m45s)
	TestStartStop/group/old-k8s-version/serial (13m45s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (6m59s)

goroutine 2597 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000e6b60, 0xc000859bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0000122b8, {0x4955920, 0x2b, 0x2b}, {0x26ad61f?, 0xc000973200?, 0x4a11cc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008d0d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008d0d20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005fb480)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2545 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a0a840, 0xc0022a8ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2542
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 24 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 23
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 558 [select, 71 minutes]:
net/http.(*persistConn).writeLoop(0xc0029be360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 555
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 1952 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021a2ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021a2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021a2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021a2ea0, 0xc00050b200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2329 [chan receive, 9 minutes]:
testing.(*T).Run(0xc002166680, {0x2660530?, 0x60400000004?}, 0xc00099aa00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002166680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002166680, 0xc00050b280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1839
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2390 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0029b64c0, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2321
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2428 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000749380, {0x2660530?, 0x60400000004?}, 0xc0027ca000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000749380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000749380, 0xc0005fb500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1838
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2544 [IO wait]:
internal/poll.runtime_pollWait(0x7f27487af6e0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00239eea0?, 0xc00258a610?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00239eea0, {0xc00258a610, 0x2b9f0, 0x2b9f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0031ea1a0, {0xc00258a610?, 0xc000508d30?, 0x3ff14?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a29c0, {0x3619560, 0xc002480440})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0022a29c0}, {0x3619560, 0xc002480440}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0031ea1a0?, {0x36196a0, 0xc0022a29c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0031ea1a0, {0x36196a0, 0xc0022a29c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0022a29c0}, {0x36195c0, 0xc0031ea1a0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022a8180?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2542
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 1700 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0024964e0, {0x2653246?, 0x55249c?}, 0xc0020e2270)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0024964e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0024964e0, 0x30c0020)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2494 [syscall, 9 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x114e2, 0xc0020aaab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc002ed8db0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc002ed8db0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023f0dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0023f0dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0028169c0, 0xc0023f0dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e780, 0xc0003be620}, 0xc0028169c0, {0xc0026e4198, 0x11}, {0x0?, 0xc000509760?}, {0x552353?, 0x4a26cf?}, {0xc0008e2600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0028169c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0028169c0, 0xc00099aa00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1838 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0024976c0, {0x26547d3?, 0x0?}, 0xc0005fb500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0024976c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0024976c0, 0xc00261e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1835
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1837 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002497520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002497520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002497520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002497520, 0xc00261e140)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1835
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 379 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0xc000bf8580, 0xc000061020)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 339
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 557 [select, 71 minutes]:
net/http.(*persistConn).readLoop(0xc0029be360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 555
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 214 [IO wait, 77 minutes]:
internal/poll.runtime_pollWait(0x7f27487af9c8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0006f8780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0006f8780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006fe040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0006fe040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007ec0f0, {0x3631760, 0xc0006fe040})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007ec0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0021a21a0?, 0xc0021a2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 211
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 372 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 371
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1890 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002497d40, 0xc0020e2270)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1700
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2595 [IO wait]:
internal/poll.runtime_pollWait(0x7f27482a1010, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002bd4240?, 0xc00221653f?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002bd4240, {0xc00221653f, 0x3ac1, 0x3ac1})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0031ea058, {0xc00221653f?, 0x21a0020?, 0xfeea?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000ce2180, {0x3619560, 0xc00098c060})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc000ce2180}, {0x3619560, 0xc00098c060}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0031ea058?, {0x36196a0, 0xc000ce2180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0031ea058, {0x36196a0, 0xc000ce2180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc000ce2180}, {0x36195c0, 0xc0031ea058}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027ca000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 328 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0xc00093ba20, 0xc000950480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 327
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 370 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000c96550, 0x21)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000985560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c96580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c06220, {0x361aac0, 0xc00096a120}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c06220, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 287 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c96580, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 348
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2488 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2487
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1839 [chan receive, 13 minutes]:
testing.(*T).Run(0xc002497860, {0x26547d3?, 0x0?}, 0xc00050b280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002497860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002497860, 0xc00261e1c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1835
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 371 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e940, 0xc00010e240}, 0xc002101f50, 0xc00391ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e940, 0xc00010e240}, 0x20?, 0xc002101f50, 0xc002101f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e940?, 0xc00010e240?}, 0xc0021a2680?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc000194000?, 0xc0006eca20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 286 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000985680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 348
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1951 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021a2d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021a2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021a2d00, 0xc00050b180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2513 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x119ad, 0xc0020abab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc00263a150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc00263a150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000bf8160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000bf8160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0028161a0, 0xc000bf8160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e780, 0xc000175490}, 0xc0028161a0, {0xc0026ab1a0, 0x1c}, {0x0?, 0xc000508f60?}, {0x552353?, 0x4a26cf?}, {0xc002234a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0028161a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0028161a0, 0xc0027ca000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2428
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2543 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f27487af3f8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00239ede0?, 0xc0028002dd?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00239ede0, {0xc0028002dd, 0x523, 0x523})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0031ea188, {0xc0028002dd?, 0xc0023eb530?, 0x208?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a2990, {0x3619560, 0xc000c600f8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0022a2990}, {0x3619560, 0xc000c600f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0031ea188?, {0x36196a0, 0xc0022a2990})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0031ea188, {0x36196a0, 0xc0022a2990})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0022a2990}, {0x36195c0, 0xc0031ea188}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022a8c60?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2542
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1950 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021a2b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021a2b60, 0xc00050b100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2594 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f27487af300, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002bd4060?, 0xc002800ac9?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002bd4060, {0xc002800ac9, 0x537, 0x537})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0031ea040, {0xc002800ac9?, 0xc00229ed30?, 0x213?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000ce2150, {0x3619560, 0xc002480188})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc000ce2150}, {0x3619560, 0xc002480188}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0031ea040?, {0x36196a0, 0xc000ce2150})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0031ea040, {0x36196a0, 0xc000ce2150})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc000ce2150}, {0x36195c0, 0xc0031ea040}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022a83c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2519 [IO wait]:
internal/poll.runtime_pollWait(0x7f27487af5e8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00279c180?, 0xc002124558?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00279c180, {0xc002124558, 0x3aa8, 0x3aa8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000c60060, {0xc002124558?, 0xc0023ef530?, 0xfe03?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0029041e0, {0x3619560, 0xc002480358})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0029041e0}, {0x3619560, 0xc002480358}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000c60060?, {0x36196a0, 0xc0029041e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000c60060, {0x36196a0, 0xc0029041e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0029041e0}, {0x36195c0, 0xc000c60060}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002708360?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2517
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1949 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021a2820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021a2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021a2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021a2820, 0xc00050b080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1984 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000e7520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000e7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000e7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000e7520, 0xc000a14880)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2486 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00092ac90, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0024b8de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00092b0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008eec40, {0x361aac0, 0xc0020e6630}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008eec40, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2469
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1836 [chan receive, 13 minutes]:
testing.(*T).Run(0xc002496ea0, {0x26547d3?, 0x0?}, 0xc00099a080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002496ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002496ea0, 0xc00261e100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1835
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2359 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2358
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2339 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0021669c0, {0x2660530?, 0x60400000004?}, 0xc00099a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0021669c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0021669c0, 0xc000a14080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1841
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 494 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0xc00282f600, 0xc002709bc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 493
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2469 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00092b0c0, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2438
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2487 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e940, 0xc00010e240}, 0xc0023ebf50, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e940, 0xc00010e240}, 0xc0?, 0xc0023ebf50, 0xc0023ebf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e940?, 0xc00010e240?}, 0xc0023ebfb0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc000a0a6e0?, 0xc0027086c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2469
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2389 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002bd4d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2321
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1841 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002497ba0, {0x26547d3?, 0x0?}, 0xc000a14080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002497ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc002497ba0, 0xc00261e280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1835
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2517 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x11716, 0xc002351ab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc002d5a240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc002d5a240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000698580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000698580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002166000, 0xc000698580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e780, 0xc0004c2000}, 0xc002166000, {0xc0024b6030, 0x12}, {0x0?, 0xc0023eaf60?}, {0x552353?, 0x4a26cf?}, {0xc0008e2700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002166000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002166000, 0xc00099a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2339
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2468 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0024b8f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2438
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2307 [chan receive, 8 minutes]:
testing.(*T).Run(0xc002816000, {0x2660530?, 0x60400000004?}, 0xc0001aa900)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002816000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002816000, 0xc00099a080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1836
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2497 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023f0dc0, 0xc0022a9080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2494
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2495 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f27487af110, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002bd4120?, 0xc002750ad4?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002bd4120, {0xc002750ad4, 0x52c, 0x52c})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002480330, {0xc002750ad4?, 0x21a0020?, 0x229?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0020e7440, {0x3619560, 0xc00098c2a8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0020e7440}, {0x3619560, 0xc00098c2a8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002480330?, {0x36196a0, 0xc0020e7440})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002480330, {0x36196a0, 0xc0020e7440})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0020e7440}, {0x36195c0, 0xc002480330}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00099aa00?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2494
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1835 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002496820, 0x30c0240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1766
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2518 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f27487afbb8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00279c0c0?, 0xc0028012a1?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00279c0c0, {0xc0028012a1, 0x55f, 0x55f})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000c60048, {0xc0028012a1?, 0x21a0020?, 0x230?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0029041b0, {0x3619560, 0xc0031ea0f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0029041b0}, {0x3619560, 0xc0031ea0f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000c60048?, {0x36196a0, 0xc0029041b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000c60048, {0x36196a0, 0xc0029041b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0029041b0}, {0x36195c0, 0xc000c60048}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00099a000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2517
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1766 [chan receive, 16 minutes]:
testing.(*T).Run(0xc002496b60, {0x2653246?, 0x552353?}, 0x30c0240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002496b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002496b60, 0x30c0068)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1983 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000e71e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000e71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000e71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000e71e0, 0xc000a14800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1891 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc00052d310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021664e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021664e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021664e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021664e0, 0xc0006f8a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1890
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2542 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x1189e, 0xc0000a9ab0, 0x1000004, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc002e28990)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc002e28990)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a0a840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a0a840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002816b60, 0xc000a0a840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x363e780, 0xc00042a000}, 0xc002816b60, {0xc002642030, 0x16}, {0x0?, 0xc002a5e760?}, {0x552353?, 0x4a26cf?}, {0xc00218c000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002816b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002816b60, 0xc0001aa900)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2307
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2520 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000698580, 0xc002808120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2517
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2496 [IO wait]:
internal/poll.runtime_pollWait(0x7f27487af4f0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002bd41e0?, 0xc0023c6132?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002bd41e0, {0xc0023c6132, 0x19ece, 0x19ece})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002480368, {0xc0023c6132?, 0x0?, 0x20000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0020e7470, {0x3619560, 0xc00098c2b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36196a0, 0xc0020e7470}, {0x3619560, 0xc00098c2b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002480368?, {0x36196a0, 0xc0020e7470})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002480368, {0x36196a0, 0xc0020e7470})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x9c
io.copyBuffer({0x36196a0, 0xc0020e7470}, {0x36195c0, 0xc002480368}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002808f00?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2494
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2357 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0029b6490, 0x2)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2145720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002bd4c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0029b64c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cfa050, {0x361aac0, 0xc00266e030}, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000cfa050, 0x3b9aca00, 0x0, 0x1, 0xc00010e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2390
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2358 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e940, 0xc00010e240}, 0xc0023e9f50, 0xc0020b2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e940, 0xc00010e240}, 0x20?, 0xc0023e9f50, 0xc0023e9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e940?, 0xc00010e240?}, 0x6dc5da?, 0x7b9db8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0023e9fd0?, 0x594064?, 0xc0006ec720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2390
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2596 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000bf8160, 0xc0027082a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                    

Test pass (164/207)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 53.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 12.69
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 89.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
28 TestCertOptions 47.34
29 TestCertExpiration 321.27
31 TestForceSystemdFlag 101.32
32 TestForceSystemdEnv 47.35
34 TestKVMDriverInstallOrUpdate 4.46
38 TestErrorSpam/setup 43.07
39 TestErrorSpam/start 0.37
40 TestErrorSpam/status 0.8
41 TestErrorSpam/pause 1.69
42 TestErrorSpam/unpause 1.75
43 TestErrorSpam/stop 4.5
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 61.87
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 34.74
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
55 TestFunctional/serial/CacheCmd/cache/add_local 2.43
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
60 TestFunctional/serial/CacheCmd/cache/delete 0.12
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
63 TestFunctional/serial/ExtraConfig 285.56
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.29
66 TestFunctional/serial/LogsFileCmd 1.3
67 TestFunctional/serial/InvalidService 4.56
69 TestFunctional/parallel/ConfigCmd 0.41
70 TestFunctional/parallel/DashboardCmd 17.16
71 TestFunctional/parallel/DryRun 0.32
72 TestFunctional/parallel/InternationalLanguage 0.16
73 TestFunctional/parallel/StatusCmd 0.9
77 TestFunctional/parallel/ServiceCmdConnect 10.6
78 TestFunctional/parallel/AddonsCmd 0.15
79 TestFunctional/parallel/PersistentVolumeClaim 50.95
81 TestFunctional/parallel/SSHCmd 0.52
82 TestFunctional/parallel/CpCmd 1.37
83 TestFunctional/parallel/MySQL 35.2
84 TestFunctional/parallel/FileSync 0.26
85 TestFunctional/parallel/CertSync 1.5
89 TestFunctional/parallel/NodeLabels 0.1
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
93 TestFunctional/parallel/License 0.67
94 TestFunctional/parallel/MountCmd/any-port 11.37
95 TestFunctional/parallel/MountCmd/specific-port 2.14
96 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
97 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.41
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
99 TestFunctional/parallel/ServiceCmd/DeployApp 9.26
100 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
102 TestFunctional/parallel/ProfileCmd/profile_list 0.35
103 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
104 TestFunctional/parallel/Version/short 0.06
105 TestFunctional/parallel/Version/components 0.91
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
111 TestFunctional/parallel/ImageCommands/Setup 2.03
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.58
113 TestFunctional/parallel/ServiceCmd/List 1.37
114 TestFunctional/parallel/ServiceCmd/JSONOutput 1.34
115 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
116 TestFunctional/parallel/ServiceCmd/Format 0.38
117 TestFunctional/parallel/ServiceCmd/URL 0.4
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.08
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.92
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.43
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.85
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.35
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
139 TestMultiControlPlane/serial/StartCluster 215.39
140 TestMultiControlPlane/serial/DeployApp 7.13
141 TestMultiControlPlane/serial/PingHostFromPods 1.41
142 TestMultiControlPlane/serial/AddWorkerNode 48.51
143 TestMultiControlPlane/serial/NodeLabels 0.06
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
145 TestMultiControlPlane/serial/CopyFile 13.6
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.56
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
154 TestMultiControlPlane/serial/RestartCluster 348.06
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
156 TestMultiControlPlane/serial/AddSecondaryNode 78.69
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
161 TestJSONOutput/start/Command 60.29
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.77
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.66
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 8.4
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.21
189 TestMainNoArgs 0.05
190 TestMinikubeProfile 94.5
193 TestMountStart/serial/StartWithMountFirst 27.2
194 TestMountStart/serial/VerifyMountFirst 0.39
195 TestMountStart/serial/StartWithMountSecond 28.48
196 TestMountStart/serial/VerifyMountSecond 0.38
197 TestMountStart/serial/DeleteFirst 0.69
198 TestMountStart/serial/VerifyMountPostDelete 0.38
199 TestMountStart/serial/Stop 1.73
200 TestMountStart/serial/RestartStopped 24.94
201 TestMountStart/serial/VerifyMountPostStop 0.4
204 TestMultiNode/serial/FreshStart2Nodes 134.26
205 TestMultiNode/serial/DeployApp2Nodes 6.63
206 TestMultiNode/serial/PingHostFrom2Pods 0.87
207 TestMultiNode/serial/AddNode 45.17
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.23
210 TestMultiNode/serial/CopyFile 7.49
211 TestMultiNode/serial/StopNode 3.16
212 TestMultiNode/serial/StartAfterStop 30.56
214 TestMultiNode/serial/DeleteNode 2.41
216 TestMultiNode/serial/RestartMultiNode 183.33
217 TestMultiNode/serial/ValidateNameConflict 49.05
224 TestScheduledStopUnix 115.57
228 TestRunningBinaryUpgrade 224.33
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
234 TestNoKubernetes/serial/StartWithK8s 94.74
235 TestStoppedBinaryUpgrade/Setup 2.62
236 TestStoppedBinaryUpgrade/Upgrade 146.96
237 TestNoKubernetes/serial/StartWithStopK8s 70.25
238 TestNoKubernetes/serial/Start 30.17
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
240 TestNoKubernetes/serial/ProfileList 34.27
256 TestNoKubernetes/serial/Stop 1.51
257 TestNoKubernetes/serial/StartNoArgs 22.44
262 TestPause/serial/Start 78.89
263 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22

TestDownloadOnly/v1.20.0/json-events (53.2s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-981776 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-981776 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.20352516s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (53.20s)

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-981776
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-981776: exit status 85 (68.35858ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-981776 | jenkins | v1.33.0 | 28 Apr 24 23:07 UTC |          |
	|         | -p download-only-981776        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 23:07:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 23:07:09.502910   20740 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:07:09.503064   20740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:07:09.503074   20740 out.go:304] Setting ErrFile to fd 2...
	I0428 23:07:09.503078   20740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:07:09.503308   20740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	W0428 23:07:09.503448   20740 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17977-13393/.minikube/config/config.json: open /home/jenkins/minikube-integration/17977-13393/.minikube/config/config.json: no such file or directory
	I0428 23:07:09.504106   20740 out.go:298] Setting JSON to true
	I0428 23:07:09.504970   20740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2974,"bootTime":1714342656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:07:09.505028   20740 start.go:139] virtualization: kvm guest
	I0428 23:07:09.507376   20740 out.go:97] [download-only-981776] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:07:09.508977   20740 out.go:169] MINIKUBE_LOCATION=17977
	W0428 23:07:09.507489   20740 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball: no such file or directory
	I0428 23:07:09.507574   20740 notify.go:220] Checking for updates...
	I0428 23:07:09.511767   20740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:07:09.513327   20740 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:07:09.514746   20740 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:07:09.516067   20740 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0428 23:07:09.518549   20740 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0428 23:07:09.518785   20740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:07:09.619324   20740 out.go:97] Using the kvm2 driver based on user configuration
	I0428 23:07:09.619351   20740 start.go:297] selected driver: kvm2
	I0428 23:07:09.619357   20740 start.go:901] validating driver "kvm2" against <nil>
	I0428 23:07:09.619705   20740 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:07:09.619847   20740 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0428 23:07:09.634338   20740 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0428 23:07:09.634394   20740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 23:07:09.634893   20740 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0428 23:07:09.635044   20740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0428 23:07:09.635098   20740 cni.go:84] Creating CNI manager for ""
	I0428 23:07:09.635110   20740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0428 23:07:09.635118   20740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 23:07:09.635174   20740 start.go:340] cluster config:
	{Name:download-only-981776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-981776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:07:09.635342   20740 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:07:09.637169   20740 out.go:97] Downloading VM boot image ...
	I0428 23:07:09.637195   20740 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0428 23:07:19.206692   20740 out.go:97] Starting "download-only-981776" primary control-plane node in "download-only-981776" cluster
	I0428 23:07:19.206719   20740 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0428 23:07:19.312023   20740 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:07:19.312054   20740 cache.go:56] Caching tarball of preloaded images
	I0428 23:07:19.312214   20740 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0428 23:07:19.314105   20740 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0428 23:07:19.314120   20740 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0428 23:07:19.424254   20740 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:07:32.804844   20740 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0428 23:07:32.804958   20740 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0428 23:07:33.713603   20740 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0428 23:07:33.713970   20740 profile.go:143] Saving config to /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/download-only-981776/config.json ...
	I0428 23:07:33.714008   20740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/download-only-981776/config.json: {Name:mk2c71576204c930bc8e2049c934c1d079cb8438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0428 23:07:33.714214   20740 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0428 23:07:33.714375   20740 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-981776 host does not exist
	  To start a cluster, run: "minikube start -p download-only-981776"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-981776
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0/json-events (12.69s)
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-354929 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-354929 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.690112613s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (12.69s)

TestDownloadOnly/v1.30.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-354929
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-354929: exit status 85 (71.758795ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-981776 | jenkins | v1.33.0 | 28 Apr 24 23:07 UTC |                     |
	|         | -p download-only-981776        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 28 Apr 24 23:08 UTC | 28 Apr 24 23:08 UTC |
	| delete  | -p download-only-981776        | download-only-981776 | jenkins | v1.33.0 | 28 Apr 24 23:08 UTC | 28 Apr 24 23:08 UTC |
	| start   | -o=json --download-only        | download-only-354929 | jenkins | v1.33.0 | 28 Apr 24 23:08 UTC |                     |
	|         | -p download-only-354929        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/28 23:08:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0428 23:08:03.038501   21084 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:08:03.038731   21084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:08:03.038740   21084 out.go:304] Setting ErrFile to fd 2...
	I0428 23:08:03.038745   21084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:08:03.038904   21084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:08:03.039484   21084 out.go:298] Setting JSON to true
	I0428 23:08:03.040339   21084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3027,"bootTime":1714342656,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:08:03.040474   21084 start.go:139] virtualization: kvm guest
	I0428 23:08:03.042693   21084 out.go:97] [download-only-354929] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:08:03.044195   21084 out.go:169] MINIKUBE_LOCATION=17977
	I0428 23:08:03.042822   21084 notify.go:220] Checking for updates...
	I0428 23:08:03.047094   21084 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:08:03.048623   21084 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:08:03.049918   21084 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:08:03.051217   21084 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0428 23:08:03.053653   21084 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0428 23:08:03.053932   21084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:08:03.084524   21084 out.go:97] Using the kvm2 driver based on user configuration
	I0428 23:08:03.084559   21084 start.go:297] selected driver: kvm2
	I0428 23:08:03.084569   21084 start.go:901] validating driver "kvm2" against <nil>
	I0428 23:08:03.085260   21084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:08:03.085338   21084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17977-13393/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0428 23:08:03.098782   21084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0428 23:08:03.098839   21084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0428 23:08:03.099278   21084 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0428 23:08:03.099405   21084 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0428 23:08:03.099447   21084 cni.go:84] Creating CNI manager for ""
	I0428 23:08:03.099459   21084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0428 23:08:03.099469   21084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0428 23:08:03.099514   21084 start.go:340] cluster config:
	{Name:download-only-354929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-354929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:08:03.099585   21084 iso.go:125] acquiring lock: {Name:mkdac29be984af97898eb097bf04b319c0b5a23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0428 23:08:03.101069   21084 out.go:97] Starting "download-only-354929" primary control-plane node in "download-only-354929" cluster
	I0428 23:08:03.101083   21084 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:08:03.208971   21084 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0428 23:08:03.209033   21084 cache.go:56] Caching tarball of preloaded images
	I0428 23:08:03.209224   21084 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0428 23:08:03.210955   21084 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0428 23:08:03.210971   21084 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0428 23:08:03.318919   21084 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/17977-13393/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-354929 host does not exist
	  To start a cluster, run: "minikube start -p download-only-354929"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-354929
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-664373 --alsologtostderr --binary-mirror http://127.0.0.1:37097 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-664373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-664373
--- PASS: TestBinaryMirror (0.58s)

TestOffline (89.37s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-047422 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-047422 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.329603797s)
helpers_test.go:175: Cleaning up "offline-crio-047422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-047422
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-047422: (1.035681869s)
--- PASS: TestOffline (89.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-971694
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-971694: exit status 85 (64.578256ms)
-- stdout --
	* Profile "addons-971694" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-971694"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-971694
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-971694: exit status 85 (60.516404ms)
-- stdout --
	* Profile "addons-971694" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-971694"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestCertOptions (47.34s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-124477 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-124477 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.203305469s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-124477 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-124477 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-124477 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-124477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-124477
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-124477: (1.62236694s)
--- PASS: TestCertOptions (47.34s)

TestCertExpiration (321.27s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523983 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523983 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.553754286s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523983 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523983 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (54.671177933s)
helpers_test.go:175: Cleaning up "cert-expiration-523983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-523983
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-523983: (1.044725534s)
--- PASS: TestCertExpiration (321.27s)

TestForceSystemdFlag (101.32s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-106262 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-106262 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.110158906s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-106262 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-106262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-106262
--- PASS: TestForceSystemdFlag (101.32s)

TestForceSystemdEnv (47.35s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-101301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-101301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.365711441s)
helpers_test.go:175: Cleaning up "force-systemd-env-101301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-101301
--- PASS: TestForceSystemdEnv (47.35s)

TestKVMDriverInstallOrUpdate (4.46s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.46s)

TestErrorSpam/setup (43.07s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-421844 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-421844 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-421844 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-421844 --driver=kvm2  --container-runtime=crio: (43.067766561s)
--- PASS: TestErrorSpam/setup (43.07s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.8s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.69s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (4.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop: (2.318685371s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop: (1.154086084s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-421844 --log_dir /tmp/nospam-421844 stop: (1.030156188s)
--- PASS: TestErrorSpam/stop (4.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17977-13393/.minikube/files/etc/test/nested/copy/20727/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.87s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-243137 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.871709849s)
--- PASS: TestFunctional/serial/StartWithProxy (61.87s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.74s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-243137 --alsologtostderr -v=8: (34.737179219s)
functional_test.go:659: soft start took 34.737812414s for "functional-243137" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.74s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-243137 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:3.1: (1.023021551s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:3.3: (1.378441062s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 cache add registry.k8s.io/pause:latest: (1.050954416s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-243137 /tmp/TestFunctionalserialCacheCmdcacheadd_local2780574130/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache add minikube-local-cache-test:functional-243137
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 cache add minikube-local-cache-test:functional-243137: (2.063570927s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache delete minikube-local-cache-test:functional-243137
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-243137
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (226.481364ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 kubectl -- --context functional-243137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-243137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (285.56s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-243137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m45.557592758s)
functional_test.go:757: restart took 4m45.557771027s for "functional-243137" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (285.56s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-243137 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.29s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 logs: (1.294258467s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.3s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 logs --file /tmp/TestFunctionalserialLogsFileCmd3928585544/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 logs --file /tmp/TestFunctionalserialLogsFileCmd3928585544/001/logs.txt: (1.297043874s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (4.56s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-243137 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-243137
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-243137: exit status 115 (292.792129ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.11:31715 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-243137 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-243137 delete -f testdata/invalidsvc.yaml: (1.04499253s)
--- PASS: TestFunctional/serial/InvalidService (4.56s)
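
The invalid-svc manifest yields a service with a NodePort URL but no running pod behind it, so `minikube service` refuses the request with SVC_UNREACHABLE (exit status 115) even though the URL table is printed. Below is a minimal pre-check for that condition, sketched outside the test suite; it assumes kubectl on PATH and reuses the context name from this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the ready endpoint IPs behind the service; an empty result is the
	// "no running pod for service invalid-svc found" condition reported above.
	out, err := exec.Command("kubectl", "--context", "functional-243137",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints; `minikube service` would exit with SVC_UNREACHABLE")
		return
	}
	fmt.Println("ready endpoint IPs:", strings.TrimSpace(string(out)))
}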

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 config get cpus: exit status 14 (86.543768ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 config get cpus: exit status 14 (55.356471ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
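
The round trip above (unset, get, set, get, unset, get) hinges on `minikube config get` exiting with status 14 when the key is absent and 0 once it is set. A standalone sketch of the same check follows; it assumes a minikube binary on PATH and is not the functional_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

// configGetExitCode runs `minikube config get <key>` and reports its exit code.
func configGetExitCode(key string) int {
	cmd := exec.Command("minikube", "config", "get", key)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1 // minikube missing or not startable
	}
	return 0
}

func main() {
	_ = exec.Command("minikube", "config", "unset", "cpus").Run()
	// With the key unset, the log above shows exit status 14.
	fmt.Println("get after unset:", configGetExitCode("cpus"))

	_ = exec.Command("minikube", "config", "set", "cpus", "2").Run()
	// Once set, `config get cpus` succeeds with exit code 0.
	fmt.Println("get after set:", configGetExitCode("cpus"))
}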

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-243137 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-243137 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 33979: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.16s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-243137 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.423029ms)

                                                
                                                
-- stdout --
	* [functional-243137] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0428 23:55:49.254219   33731 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:55:49.254331   33731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:55:49.254346   33731 out.go:304] Setting ErrFile to fd 2...
	I0428 23:55:49.254350   33731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:55:49.254603   33731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:55:49.255133   33731 out.go:298] Setting JSON to false
	I0428 23:55:49.256118   33731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5893,"bootTime":1714342656,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:55:49.256182   33731 start.go:139] virtualization: kvm guest
	I0428 23:55:49.258835   33731 out.go:177] * [functional-243137] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0428 23:55:49.260335   33731 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 23:55:49.260408   33731 notify.go:220] Checking for updates...
	I0428 23:55:49.261928   33731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:55:49.263480   33731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:55:49.264929   33731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:55:49.266222   33731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0428 23:55:49.267576   33731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 23:55:49.269384   33731 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:55:49.269845   33731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:55:49.269901   33731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:55:49.289850   33731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0428 23:55:49.290284   33731 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:55:49.290860   33731 main.go:141] libmachine: Using API Version  1
	I0428 23:55:49.290881   33731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:55:49.291317   33731 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:55:49.291530   33731 main.go:141] libmachine: (functional-243137) Calling .DriverName
	I0428 23:55:49.291813   33731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:55:49.292208   33731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:55:49.292258   33731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:55:49.307635   33731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0428 23:55:49.308090   33731 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:55:49.308644   33731 main.go:141] libmachine: Using API Version  1
	I0428 23:55:49.308660   33731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:55:49.309026   33731 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:55:49.309191   33731 main.go:141] libmachine: (functional-243137) Calling .DriverName
	I0428 23:55:49.347399   33731 out.go:177] * Using the kvm2 driver based on existing profile
	I0428 23:55:49.348907   33731 start.go:297] selected driver: kvm2
	I0428 23:55:49.348928   33731 start.go:901] validating driver "kvm2" against &{Name:functional-243137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-243137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:55:49.349093   33731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 23:55:49.351897   33731 out.go:177] 
	W0428 23:55:49.353566   33731 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0428 23:55:49.354881   33731 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
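
Both invocations above only validate configuration: the first requests 250MB and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), the second drops the memory flag and succeeds without touching the VM. A hedged reproduction of the failing probe, assuming a minikube binary on PATH and reusing the profile name from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Dry-run with an undersized memory request; the run above exits with status 23.
	cmd := exec.Command("minikube", "start", "-p", "functional-243137",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("dry-run rejected the request, exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
	fmt.Printf("%s", out)
}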

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-243137 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-243137 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.542581ms)

                                                
                                                
-- stdout --
	* [functional-243137] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0428 23:55:49.090878   33668 out.go:291] Setting OutFile to fd 1 ...
	I0428 23:55:49.091005   33668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:55:49.091017   33668 out.go:304] Setting ErrFile to fd 2...
	I0428 23:55:49.091025   33668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0428 23:55:49.091446   33668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0428 23:55:49.092105   33668 out.go:298] Setting JSON to false
	I0428 23:55:49.093444   33668 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5893,"bootTime":1714342656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0428 23:55:49.093529   33668 start.go:139] virtualization: kvm guest
	I0428 23:55:49.096194   33668 out.go:177] * [functional-243137] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0428 23:55:49.097861   33668 notify.go:220] Checking for updates...
	I0428 23:55:49.097873   33668 out.go:177]   - MINIKUBE_LOCATION=17977
	I0428 23:55:49.099470   33668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0428 23:55:49.101162   33668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	I0428 23:55:49.102624   33668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	I0428 23:55:49.104085   33668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0428 23:55:49.105547   33668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0428 23:55:49.107288   33668 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0428 23:55:49.107636   33668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:55:49.107674   33668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:55:49.123599   33668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0428 23:55:49.123993   33668 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:55:49.124542   33668 main.go:141] libmachine: Using API Version  1
	I0428 23:55:49.124572   33668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:55:49.124879   33668 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:55:49.125064   33668 main.go:141] libmachine: (functional-243137) Calling .DriverName
	I0428 23:55:49.125310   33668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0428 23:55:49.125610   33668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0428 23:55:49.125653   33668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0428 23:55:49.140803   33668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0428 23:55:49.141253   33668 main.go:141] libmachine: () Calling .GetVersion
	I0428 23:55:49.141816   33668 main.go:141] libmachine: Using API Version  1
	I0428 23:55:49.141838   33668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0428 23:55:49.142179   33668 main.go:141] libmachine: () Calling .GetMachineName
	I0428 23:55:49.142319   33668 main.go:141] libmachine: (functional-243137) Calling .DriverName
	I0428 23:55:49.182707   33668 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0428 23:55:49.184515   33668 start.go:297] selected driver: kvm2
	I0428 23:55:49.184534   33668 start.go:901] validating driver "kvm2" against &{Name:functional-243137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-243137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0428 23:55:49.184668   33668 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0428 23:55:49.187057   33668 out.go:177] 
	W0428 23:55:49.188523   33668 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0428 23:55:49.189860   33668 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-243137 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-243137 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-tv5sm" [d0adb447-db3c-4aa6-a45e-285518fb15aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-tv5sm" [d0adb447-db3c-4aa6-a45e-285518fb15aa] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005385025s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.11:30481
functional_test.go:1671: http://192.168.39.11:30481: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-tv5sm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.11:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.11:30481
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.60s)
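
The check above is a plain HTTP round trip: create the deployment, expose it as a NodePort service, ask minikube for the URL, and GET it once the pod is Running. A compact sketch of the last two steps, assuming minikube on PATH and the service name from this log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed service.
	out, err := exec.Command("minikube", "-p", "functional-243137",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	// Fetch the echoserver response; in the log it reports the pod hostname.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}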

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3f19bf50-0c99-49b5-aa78-2d13c818dc32] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004353361s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-243137 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-243137 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-243137 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-243137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c4eedbae-6ecd-41a8-a3be-e4b66f7e01bf] Pending
helpers_test.go:344: "sp-pod" [c4eedbae-6ecd-41a8-a3be-e4b66f7e01bf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c4eedbae-6ecd-41a8-a3be-e4b66f7e01bf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.005411551s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-243137 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-243137 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-243137 delete -f testdata/storage-provisioner/pod.yaml: (1.990695359s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-243137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c955c8c7-7d53-4eec-9ae0-99ee04aa1a08] Pending
helpers_test.go:344: "sp-pod" [c955c8c7-7d53-4eec-9ae0-99ee04aa1a08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c955c8c7-7d53-4eec-9ae0-99ee04aa1a08] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005024736s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-243137 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.95s)
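
The persistence check above survives a pod restart: a marker file is written into the PVC-backed volume, the pod is deleted and recreated from the same manifest, and the file is expected to still be there. A hedged sketch of that sequence, assuming kubectl on PATH, the context from this log, and the same testdata manifests; waiting for sp-pod to become Running again is elided.

package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl against the functional-243137 context and echoes its output.
func run(args ...string) {
	args = append([]string{"--context", "functional-243137"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait until sp-pod is Running again, then confirm the file survived:
	run("exec", "sp-pod", "--", "ls", "/tmp/mount")
}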

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh -n functional-243137 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cp functional-243137:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3867700498/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh -n functional-243137 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh -n functional-243137 "sudo cat /tmp/does/not/exist/cp-test.txt"
2024/04/28 23:56:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

                                                
                                    
TestFunctional/parallel/MySQL (35.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-243137 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-qrrh6" [f3c7be29-4887-4ebe-a4b2-5340e0c7253e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-qrrh6" [f3c7be29-4887-4ebe-a4b2-5340e0c7253e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.006326338s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-243137 exec mysql-64454c8b5c-qrrh6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-243137 exec mysql-64454c8b5c-qrrh6 -- mysql -ppassword -e "show databases;": exit status 1 (168.708931ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-243137 exec mysql-64454c8b5c-qrrh6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-243137 exec mysql-64454c8b5c-qrrh6 -- mysql -ppassword -e "show databases;": exit status 1 (154.151764ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-243137 exec mysql-64454c8b5c-qrrh6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.20s)
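
The two ERROR 2002 failures are expected: mysqld inside the pod is still initializing, so the client cannot reach its unix socket yet, and the test simply retries until `show databases;` succeeds. A hedged retry loop along the same lines (the pod name is copied from this run and changes on every deployment):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry until mysqld accepts connections; early attempts fail with
	// "ERROR 2002 (HY000)" exactly as in the log above.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-243137",
			"exec", "mysql-64454c8b5c-qrrh6", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is ready:\n%s", out)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}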

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20727/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /etc/test/nested/copy/20727/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20727.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /etc/ssl/certs/20727.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20727.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /usr/share/ca-certificates/20727.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/207272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /etc/ssl/certs/207272.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/207272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /usr/share/ca-certificates/207272.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)
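
Each certificate is checked under three names: the copy in /etc/ssl/certs, the copy in /usr/share/ca-certificates, and a hash-named entry such as /etc/ssl/certs/51391683.0, which appears to follow the OpenSSL subject-hash convention used for trust-store lookups. A small sketch for computing that hash for a PEM file, assuming openssl on PATH; the input path is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash used to name <hash>.0
	// entries in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/20727.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("expected trust-store name: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}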

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-243137 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "sudo systemctl is-active docker": exit status 1 (213.551583ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "sudo systemctl is-active containerd": exit status 1 (221.759822ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
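
With crio as the configured runtime, docker and containerd are expected to be stopped; `systemctl is-active` prints "inactive" and exits with status 3 for a stopped unit, which is the "Process exited with status 3" shown in the stderr above, and minikube then reports its own non-zero exit. A hedged sketch of the same probe over `minikube ssh`, assuming minikube on PATH and the profile name from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive asks systemd inside the minikube VM whether a unit is running.
func isActive(unit string) {
	out, err := exec.Command("minikube", "-p", "functional-243137",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	// docker and containerd should print "inactive" (remote exit status 3).
	fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
}

func main() {
	isActive("docker")
	isActive("containerd")
	isActive("crio") // the active runtime should report "active"
}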

                                                
                                    
TestFunctional/parallel/License (0.67s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.67s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.37s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdany-port1077763862/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714348548627176578" to /tmp/TestFunctionalparallelMountCmdany-port1077763862/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714348548627176578" to /tmp/TestFunctionalparallelMountCmdany-port1077763862/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714348548627176578" to /tmp/TestFunctionalparallelMountCmdany-port1077763862/001/test-1714348548627176578
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.616083ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 28 23:55 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 28 23:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 28 23:55 test-1714348548627176578
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh cat /mount-9p/test-1714348548627176578
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-243137 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5b7cf20-6daa-405c-8cc4-34a30dd0b706] Pending
helpers_test.go:344: "busybox-mount" [d5b7cf20-6daa-405c-8cc4-34a30dd0b706] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5b7cf20-6daa-405c-8cc4-34a30dd0b706] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5b7cf20-6daa-405c-8cc4-34a30dd0b706] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00476999s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-243137 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdany-port1077763862/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdspecific-port3612717753/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.973816ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdspecific-port3612717753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "sudo umount -f /mount-9p": exit status 1 (269.94128ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-243137 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdspecific-port3612717753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-243137 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-243137 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-f9jjz" [d6cb2739-06e1-4ee2-9cf1-a32be15ff03d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-f9jjz" [d6cb2739-06e1-4ee2-9cf1-a32be15ff03d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006171631s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T" /mount1: exit status 1 (405.479123ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-243137 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-243137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup206536709/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "294.414353ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "56.181227ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "260.333885ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "61.242002ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)
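Note: both version forms used above are plain CLI calls against the same profile; a sketch:

	# print only the minikube version string
	out/minikube-linux-amd64 -p functional-243137 version --short
	# print minikube's version plus the versions of its bundled components as JSON
	out/minikube-linux-amd64 -p functional-243137 version -o=json --components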

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-243137 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-243137
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-243137
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-243137 image ls --format short --alsologtostderr:
I0428 23:56:39.142453   36030 out.go:291] Setting OutFile to fd 1 ...
I0428 23:56:39.142586   36030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.142598   36030 out.go:304] Setting ErrFile to fd 2...
I0428 23:56:39.142604   36030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.142928   36030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
I0428 23:56:39.143662   36030 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.143822   36030 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.144360   36030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.144409   36030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.159078   36030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
I0428 23:56:39.159524   36030 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.160090   36030 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.160117   36030 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.160439   36030 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.160623   36030 main.go:141] libmachine: (functional-243137) Calling .GetState
I0428 23:56:39.162426   36030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.162472   36030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.176732   36030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
I0428 23:56:39.177150   36030 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.177660   36030 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.177679   36030 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.177961   36030 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.178148   36030 main.go:141] libmachine: (functional-243137) Calling .DriverName
I0428 23:56:39.178353   36030 ssh_runner.go:195] Run: systemctl --version
I0428 23:56:39.178384   36030 main.go:141] libmachine: (functional-243137) Calling .GetSSHHostname
I0428 23:56:39.181235   36030 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.181706   36030 main.go:141] libmachine: (functional-243137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:e1:76", ip: ""} in network mk-functional-243137: {Iface:virbr1 ExpiryTime:2024-04-29 00:49:26 +0000 UTC Type:0 Mac:52:54:00:da:e1:76 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:functional-243137 Clientid:01:52:54:00:da:e1:76}
I0428 23:56:39.181747   36030 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined IP address 192.168.39.11 and MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.181784   36030 main.go:141] libmachine: (functional-243137) Calling .GetSSHPort
I0428 23:56:39.181980   36030 main.go:141] libmachine: (functional-243137) Calling .GetSSHKeyPath
I0428 23:56:39.182254   36030 main.go:141] libmachine: (functional-243137) Calling .GetSSHUsername
I0428 23:56:39.182421   36030 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/functional-243137/id_rsa Username:docker}
I0428 23:56:39.269630   36030 ssh_runner.go:195] Run: sudo crictl images --output json
I0428 23:56:39.322588   36030 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.322604   36030 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.322865   36030 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.322884   36030 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:39.322898   36030 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.322918   36030 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.322927   36030 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:39.323171   36030 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:39.323205   36030 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.323242   36030 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-243137 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-243137  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-243137  | 18fc92f03e9b4 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-243137 image ls --format table --alsologtostderr:
I0428 23:56:39.640293   36151 out.go:291] Setting OutFile to fd 1 ...
I0428 23:56:39.640400   36151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.640409   36151 out.go:304] Setting ErrFile to fd 2...
I0428 23:56:39.640413   36151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.640580   36151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
I0428 23:56:39.641285   36151 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.641444   36151 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.641851   36151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.641884   36151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.656477   36151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
I0428 23:56:39.656902   36151 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.657513   36151 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.657547   36151 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.657914   36151 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.658130   36151 main.go:141] libmachine: (functional-243137) Calling .GetState
I0428 23:56:39.660308   36151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.660358   36151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.675107   36151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
I0428 23:56:39.675477   36151 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.675990   36151 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.676014   36151 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.676794   36151 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.676972   36151 main.go:141] libmachine: (functional-243137) Calling .DriverName
I0428 23:56:39.677192   36151 ssh_runner.go:195] Run: systemctl --version
I0428 23:56:39.677218   36151 main.go:141] libmachine: (functional-243137) Calling .GetSSHHostname
I0428 23:56:39.679841   36151 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.680239   36151 main.go:141] libmachine: (functional-243137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:e1:76", ip: ""} in network mk-functional-243137: {Iface:virbr1 ExpiryTime:2024-04-29 00:49:26 +0000 UTC Type:0 Mac:52:54:00:da:e1:76 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:functional-243137 Clientid:01:52:54:00:da:e1:76}
I0428 23:56:39.680275   36151 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined IP address 192.168.39.11 and MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.680389   36151 main.go:141] libmachine: (functional-243137) Calling .GetSSHPort
I0428 23:56:39.680577   36151 main.go:141] libmachine: (functional-243137) Calling .GetSSHKeyPath
I0428 23:56:39.680746   36151 main.go:141] libmachine: (functional-243137) Calling .GetSSHUsername
I0428 23:56:39.680892   36151 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/functional-243137/id_rsa Username:docker}
I0428 23:56:39.774673   36151 ssh_runner.go:195] Run: sudo crictl images --output json
I0428 23:56:39.843435   36151 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.843452   36151 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.843753   36151 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.843771   36151 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:39.843783   36151 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.843792   36151 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.843993   36151 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.844005   36151 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:39.844083   36151 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-243137 image ls --format json --alsologtostderr:
[{"id":"18fc92f03e9b43d0a236d7a1a14f710e6dadc157ddd186d3902c12041bf4a7f0","repoDigests":["localhost/minikube-local-cache-test@sha256:c4578534831480265d69dc0cbdc9cb4ec01203f11f399b978f422d6cd7de7cfc"],"repoTags":["localhost/minikube-local-cache-test:functional-243137"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"siz
e":"117609952"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id"
:"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-243137"],"size":"34114467"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"56cc512116c8f894f11ce1995460ae
f1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoD
igests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bb
c1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDiges
ts":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-243137 image ls --format json --alsologtostderr:
I0428 23:56:39.397468   36085 out.go:291] Setting OutFile to fd 1 ...
I0428 23:56:39.397579   36085 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.397589   36085 out.go:304] Setting ErrFile to fd 2...
I0428 23:56:39.397594   36085 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.397795   36085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
I0428 23:56:39.398344   36085 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.398440   36085 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.398799   36085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.398836   36085 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.414386   36085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
I0428 23:56:39.414933   36085 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.415539   36085 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.415558   36085 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.415941   36085 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.416138   36085 main.go:141] libmachine: (functional-243137) Calling .GetState
I0428 23:56:39.418550   36085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.418595   36085 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.435416   36085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
I0428 23:56:39.435913   36085 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.436519   36085 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.436550   36085 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.436998   36085 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.437156   36085 main.go:141] libmachine: (functional-243137) Calling .DriverName
I0428 23:56:39.437375   36085 ssh_runner.go:195] Run: systemctl --version
I0428 23:56:39.437405   36085 main.go:141] libmachine: (functional-243137) Calling .GetSSHHostname
I0428 23:56:39.440493   36085 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.440925   36085 main.go:141] libmachine: (functional-243137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:e1:76", ip: ""} in network mk-functional-243137: {Iface:virbr1 ExpiryTime:2024-04-29 00:49:26 +0000 UTC Type:0 Mac:52:54:00:da:e1:76 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:functional-243137 Clientid:01:52:54:00:da:e1:76}
I0428 23:56:39.440956   36085 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined IP address 192.168.39.11 and MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.441088   36085 main.go:141] libmachine: (functional-243137) Calling .GetSSHPort
I0428 23:56:39.441298   36085 main.go:141] libmachine: (functional-243137) Calling .GetSSHKeyPath
I0428 23:56:39.441465   36085 main.go:141] libmachine: (functional-243137) Calling .GetSSHUsername
I0428 23:56:39.441606   36085 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/functional-243137/id_rsa Username:docker}
I0428 23:56:39.540085   36085 ssh_runner.go:195] Run: sudo crictl images --output json
I0428 23:56:39.612192   36085 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.612205   36085 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.612528   36085 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:39.612560   36085 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.612566   36085 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:39.612572   36085 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.612576   36085 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.614094   36085 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:39.614116   36085 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.614129   36085 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-243137 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 18fc92f03e9b43d0a236d7a1a14f710e6dadc157ddd186d3902c12041bf4a7f0
repoDigests:
- localhost/minikube-local-cache-test@sha256:c4578534831480265d69dc0cbdc9cb4ec01203f11f399b978f422d6cd7de7cfc
repoTags:
- localhost/minikube-local-cache-test:functional-243137
size: "3330"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-243137
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-243137 image ls --format yaml --alsologtostderr:
I0428 23:56:39.142453   36031 out.go:291] Setting OutFile to fd 1 ...
I0428 23:56:39.142590   36031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.142598   36031 out.go:304] Setting ErrFile to fd 2...
I0428 23:56:39.142604   36031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.142909   36031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
I0428 23:56:39.143662   36031 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.143808   36031 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.144360   36031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.144407   36031 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.158920   36031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
I0428 23:56:39.159398   36031 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.159995   36031 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.160016   36031 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.160384   36031 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.160586   36031 main.go:141] libmachine: (functional-243137) Calling .GetState
I0428 23:56:39.162495   36031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.162537   36031 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.177082   36031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
I0428 23:56:39.177425   36031 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.177921   36031 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.177945   36031 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.178264   36031 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.178477   36031 main.go:141] libmachine: (functional-243137) Calling .DriverName
I0428 23:56:39.178651   36031 ssh_runner.go:195] Run: systemctl --version
I0428 23:56:39.178676   36031 main.go:141] libmachine: (functional-243137) Calling .GetSSHHostname
I0428 23:56:39.181580   36031 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.182103   36031 main.go:141] libmachine: (functional-243137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:e1:76", ip: ""} in network mk-functional-243137: {Iface:virbr1 ExpiryTime:2024-04-29 00:49:26 +0000 UTC Type:0 Mac:52:54:00:da:e1:76 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:functional-243137 Clientid:01:52:54:00:da:e1:76}
I0428 23:56:39.182134   36031 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined IP address 192.168.39.11 and MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.182209   36031 main.go:141] libmachine: (functional-243137) Calling .GetSSHPort
I0428 23:56:39.182335   36031 main.go:141] libmachine: (functional-243137) Calling .GetSSHKeyPath
I0428 23:56:39.182488   36031 main.go:141] libmachine: (functional-243137) Calling .GetSSHUsername
I0428 23:56:39.182622   36031 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/functional-243137/id_rsa Username:docker}
I0428 23:56:39.273037   36031 ssh_runner.go:195] Run: sudo crictl images --output json
I0428 23:56:39.333567   36031 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.333592   36031 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.333924   36031 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.333941   36031 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:39.333955   36031 main.go:141] libmachine: Making call to close driver server
I0428 23:56:39.333968   36031 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:39.334264   36031 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:39.334284   36031 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
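Note: the four listing formats exercised above (short, table, json, yaml) are all views of the same image ls data; a sketch, where the jq step is optional and assumes the JSON shape shown in the ImageListJson output (an array of objects with id, repoDigests, repoTags and size):

	out/minikube-linux-amd64 -p functional-243137 image ls --format short
	out/minikube-linux-amd64 -p functional-243137 image ls --format table
	out/minikube-linux-amd64 -p functional-243137 image ls --format yaml
	# extract only the tag names from the JSON form
	out/minikube-linux-amd64 -p functional-243137 image ls --format json | jq -r '.[].repoTags[]'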

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-243137 ssh pgrep buildkitd: exit status 1 (229.579689ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image build -t localhost/my-image:functional-243137 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image build -t localhost/my-image:functional-243137 testdata/build --alsologtostderr: (3.436303334s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-243137 image build -t localhost/my-image:functional-243137 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 211ba2d18ce
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-243137
--> eff7337aa9f
Successfully tagged localhost/my-image:functional-243137
eff7337aa9f10a3fc6a64cfbc348ccf151ac8d3e9c5135a96f174b432754ec7e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-243137 image build -t localhost/my-image:functional-243137 testdata/build --alsologtostderr:
I0428 23:56:39.618994   36141 out.go:291] Setting OutFile to fd 1 ...
I0428 23:56:39.619252   36141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.619265   36141 out.go:304] Setting ErrFile to fd 2...
I0428 23:56:39.619272   36141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0428 23:56:39.619590   36141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
I0428 23:56:39.620272   36141 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.620950   36141 config.go:182] Loaded profile config "functional-243137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0428 23:56:39.621494   36141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.621543   36141 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.637784   36141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
I0428 23:56:39.638424   36141 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.639140   36141 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.639171   36141 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.639560   36141 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.639767   36141 main.go:141] libmachine: (functional-243137) Calling .GetState
I0428 23:56:39.641784   36141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0428 23:56:39.641826   36141 main.go:141] libmachine: Launching plugin server for driver kvm2
I0428 23:56:39.656357   36141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
I0428 23:56:39.656918   36141 main.go:141] libmachine: () Calling .GetVersion
I0428 23:56:39.657408   36141 main.go:141] libmachine: Using API Version  1
I0428 23:56:39.657428   36141 main.go:141] libmachine: () Calling .SetConfigRaw
I0428 23:56:39.657765   36141 main.go:141] libmachine: () Calling .GetMachineName
I0428 23:56:39.657939   36141 main.go:141] libmachine: (functional-243137) Calling .DriverName
I0428 23:56:39.658154   36141 ssh_runner.go:195] Run: systemctl --version
I0428 23:56:39.658183   36141 main.go:141] libmachine: (functional-243137) Calling .GetSSHHostname
I0428 23:56:39.660872   36141 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.661222   36141 main.go:141] libmachine: (functional-243137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:e1:76", ip: ""} in network mk-functional-243137: {Iface:virbr1 ExpiryTime:2024-04-29 00:49:26 +0000 UTC Type:0 Mac:52:54:00:da:e1:76 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:functional-243137 Clientid:01:52:54:00:da:e1:76}
I0428 23:56:39.661250   36141 main.go:141] libmachine: (functional-243137) DBG | domain functional-243137 has defined IP address 192.168.39.11 and MAC address 52:54:00:da:e1:76 in network mk-functional-243137
I0428 23:56:39.661328   36141 main.go:141] libmachine: (functional-243137) Calling .GetSSHPort
I0428 23:56:39.661491   36141 main.go:141] libmachine: (functional-243137) Calling .GetSSHKeyPath
I0428 23:56:39.661592   36141 main.go:141] libmachine: (functional-243137) Calling .GetSSHUsername
I0428 23:56:39.661732   36141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/functional-243137/id_rsa Username:docker}
I0428 23:56:39.749187   36141 build_images.go:161] Building image from path: /tmp/build.3288419326.tar
I0428 23:56:39.749239   36141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0428 23:56:39.763949   36141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3288419326.tar
I0428 23:56:39.769716   36141 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3288419326.tar: stat -c "%s %y" /var/lib/minikube/build/build.3288419326.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3288419326.tar': No such file or directory
I0428 23:56:39.769757   36141 ssh_runner.go:362] scp /tmp/build.3288419326.tar --> /var/lib/minikube/build/build.3288419326.tar (3072 bytes)
I0428 23:56:39.827461   36141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3288419326
I0428 23:56:39.862872   36141 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3288419326 -xf /var/lib/minikube/build/build.3288419326.tar
I0428 23:56:39.874772   36141 crio.go:315] Building image: /var/lib/minikube/build/build.3288419326
I0428 23:56:39.874863   36141 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-243137 /var/lib/minikube/build/build.3288419326 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0428 23:56:42.966432   36141 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-243137 /var/lib/minikube/build/build.3288419326 --cgroup-manager=cgroupfs: (3.091545409s)
I0428 23:56:42.966510   36141 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3288419326
I0428 23:56:42.979336   36141 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3288419326.tar
I0428 23:56:42.991511   36141 build_images.go:217] Built localhost/my-image:functional-243137 from /tmp/build.3288419326.tar
I0428 23:56:42.991539   36141 build_images.go:133] succeeded building to: functional-243137
I0428 23:56:42.991544   36141 build_images.go:134] failed building to: 
I0428 23:56:42.991564   36141 main.go:141] libmachine: Making call to close driver server
I0428 23:56:42.991576   36141 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:42.991841   36141 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:42.991858   36141 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:42.991865   36141 main.go:141] libmachine: Making call to close connection to plugin binary
I0428 23:56:42.991881   36141 main.go:141] libmachine: Making call to close driver server
I0428 23:56:42.991890   36141 main.go:141] libmachine: (functional-243137) Calling .Close
I0428 23:56:42.992138   36141 main.go:141] libmachine: Successfully made call to close driver server
I0428 23:56:42.992146   36141 main.go:141] libmachine: (functional-243137) DBG | Closing plugin on server side
I0428 23:56:42.992163   36141 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
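Note: the three build steps logged above imply a minimal build context; testdata/build is not reproduced in this report, but a context that would produce the same steps looks like this (layout assumed, content.txt being any small file):

	# testdata/build/Dockerfile (assumed)
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

It is built inside the VM with the same command the test runs:

	out/minikube-linux-amd64 -p functional-243137 image build -t localhost/my-image:functional-243137 testdata/build --alsologtostderr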

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.008192235s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-243137
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr: (7.339583994s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.58s)
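Note: Setup and ImageLoadDaemon together exercise the host-daemon path: the image is pulled and retagged in the host's Docker daemon, then copied into the cluster's CRI-O image store. A sketch of that flow:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-243137
	# --daemon sources the image from the local Docker daemon rather than a registry or tarball
	out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137
	out/minikube-linux-amd64 -p functional-243137 image ls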

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 service list: (1.365781244s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 service list -o json: (1.339304966s)
functional_test.go:1490: Took "1.339415698s" to run "out/minikube-linux-amd64 -p functional-243137 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.11:30667
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.11:30667
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
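Note: the ServiceCmd checks above all resolve the same NodePort endpoint (http://192.168.39.11:30667 in this run); a sketch of querying and probing it, where the curl step is illustrative and assumes the hello-node service answers plain HTTP:

	out/minikube-linux-amd64 -p functional-243137 service list
	out/minikube-linux-amd64 -p functional-243137 service hello-node --url
	out/minikube-linux-amd64 -p functional-243137 service --namespace=default --https --url hello-node
	# probe the reported endpoint
	curl -s http://192.168.39.11:30667/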

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr: (2.686299443s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.040113847s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-243137
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image load --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr: (10.633273093s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image save gcr.io/google-containers/addon-resizer:functional-243137 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image save gcr.io/google-containers/addon-resizer:functional-243137 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.425594654s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image rm gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.601794878s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.85s)
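
Taken together, the ImageSaveToFile, ImageRemove, and ImageLoadFromFile subtests above exercise a save/remove/load round trip. A condensed sketch using the same commands and paths shown in the log (illustration only, not additional test output):

    # save the tagged image to a tarball, drop it from the runtime, then reload it and list images
    out/minikube-linux-amd64 -p functional-243137 image save gcr.io/google-containers/addon-resizer:functional-243137 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-243137 image rm gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
    out/minikube-linux-amd64 -p functional-243137 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-243137 image ls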

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-243137
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-243137 image save --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-243137 image save --daemon gcr.io/google-containers/addon-resizer:functional-243137 --alsologtostderr: (1.321363361s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-243137
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-243137
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-243137
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-243137
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (215.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274394 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-274394 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m34.664287468s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.39s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-274394 -- rollout status deployment/busybox: (4.43882406s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-kjcqn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-tmk6v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-wwl6p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-kjcqn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-tmk6v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-wwl6p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-kjcqn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-tmk6v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-wwl6p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.13s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-kjcqn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-kjcqn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-tmk6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-tmk6v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-wwl6p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274394 -- exec busybox-fc5497c4f-wwl6p -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)
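
For readability: the shell pipeline each pod runs above extracts the resolved address of host.minikube.internal from nslookup output (assuming the busybox nslookup layout, where the answer appears on the fifth line), and that address is what the follow-up ping targets; in this run it resolves to the host gateway 192.168.39.1. A hedged sketch of the same pipeline:

    # resolve host.minikube.internal inside the pod and keep only the address field
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    # then verify the host is reachable from the pod
    ping -c 1 192.168.39.1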

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-274394 -v=7 --alsologtostderr
E0429 00:00:48.629131   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.634848   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.645169   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.665468   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.705823   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.786115   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:48.946480   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:49.267082   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:49.908099   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:51.188586   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:53.749719   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:00:58.870211   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:01:09.111149   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-274394 -v=7 --alsologtostderr: (47.636645304s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-274394 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp testdata/cp-test.txt ha-274394:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394:/home/docker/cp-test.txt ha-274394-m02:/home/docker/cp-test_ha-274394_ha-274394-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test_ha-274394_ha-274394-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394:/home/docker/cp-test.txt ha-274394-m03:/home/docker/cp-test_ha-274394_ha-274394-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test_ha-274394_ha-274394-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394:/home/docker/cp-test.txt ha-274394-m04:/home/docker/cp-test_ha-274394_ha-274394-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test_ha-274394_ha-274394-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp testdata/cp-test.txt ha-274394-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m02:/home/docker/cp-test.txt ha-274394:/home/docker/cp-test_ha-274394-m02_ha-274394.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test_ha-274394-m02_ha-274394.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m02:/home/docker/cp-test.txt ha-274394-m03:/home/docker/cp-test_ha-274394-m02_ha-274394-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test_ha-274394-m02_ha-274394-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m02:/home/docker/cp-test.txt ha-274394-m04:/home/docker/cp-test_ha-274394-m02_ha-274394-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test_ha-274394-m02_ha-274394-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp testdata/cp-test.txt ha-274394-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt ha-274394:/home/docker/cp-test_ha-274394-m03_ha-274394.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test_ha-274394-m03_ha-274394.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt ha-274394-m02:/home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test_ha-274394-m03_ha-274394-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m03:/home/docker/cp-test.txt ha-274394-m04:/home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test_ha-274394-m03_ha-274394-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp testdata/cp-test.txt ha-274394-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3174175435/001/cp-test_ha-274394-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt ha-274394:/home/docker/cp-test_ha-274394-m04_ha-274394.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test_ha-274394-m04_ha-274394.txt"
E0429 00:01:29.591736   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt ha-274394-m02:/home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test_ha-274394-m04_ha-274394-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 cp ha-274394-m04:/home/docker/cp-test.txt ha-274394-m03:/home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m03 "sudo cat /home/docker/cp-test_ha-274394-m04_ha-274394-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.60s)
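
The long sequence above repeats one verification pattern per node pair; a condensed sketch of that pattern, using the same commands from the log (illustration only):

    # copy a test file into a node, then read it back over ssh to confirm the transfer
    out/minikube-linux-amd64 -p ha-274394 cp testdata/cp-test.txt ha-274394:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394 "sudo cat /home/docker/cp-test.txt"
    # node-to-node variant: copy between nodes, then verify on the destination node
    out/minikube-linux-amd64 -p ha-274394 cp ha-274394:/home/docker/cp-test.txt ha-274394-m02:/home/docker/cp-test_ha-274394_ha-274394-m02.txt
    out/minikube-linux-amd64 -p ha-274394 ssh -n ha-274394-m02 "sudo cat /home/docker/cp-test_ha-274394_ha-274394-m02.txt"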

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.511189244s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-274394 node delete m03 -v=7 --alsologtostderr: (16.77128546s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.56s)
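
For reference, the go-template passed to kubectl above walks every node and prints the status of its "Ready" condition, one value per line, which lets the check confirm that each remaining node still reports True. A hedged, directly runnable form of the same one-liner from the log:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'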

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (348.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274394 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 00:15:48.628627   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
E0429 00:17:11.675184   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-274394 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m47.243453973s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (348.06s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-274394 --control-plane -v=7 --alsologtostderr
E0429 00:20:48.629444   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-274394 --control-plane -v=7 --alsologtostderr: (1m17.791836384s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-274394 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
TestJSONOutput/start/Command (60.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-088510 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-088510 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.293322074s)
--- PASS: TestJSONOutput/start/Command (60.29s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-088510 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-088510 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.4s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-088510 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-088510 --output=json --user=testUser: (8.401673186s)
--- PASS: TestJSONOutput/stop/Command (8.40s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-823760 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-823760 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.798519ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b8be1959-4489-491e-b0a3-75127f4e97b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-823760] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba51fa2f-8a5a-4ca6-8d5a-3d9d07885b5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17977"}}
	{"specversion":"1.0","id":"aadc7196-7561-45a9-bf32-d7f80adcb573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc51ec17-12df-4cf4-828a-bce0e327d343","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig"}}
	{"specversion":"1.0","id":"eeaf8f43-75bd-468e-b825-43476f3a1911","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube"}}
	{"specversion":"1.0","id":"3e2aab8e-4df4-4433-afd0-417080280e10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"836180d8-e3db-4f97-bf83-cfbde6a450d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aa938968-8936-46d4-bbc9-ebf7e9a17d34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-823760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-823760
--- PASS: TestErrorJSONOutput (0.21s)
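
The stdout above is a stream of CloudEvents-style JSON objects, one per line; the final io.k8s.sigs.minikube.error event carries the exit code and the DRV_UNSUPPORTED_OS message. A hedged example (not part of the test) of pulling the error events out of such a stream with jq:

    out/minikube-linux-amd64 start -p json-output-error-823760 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'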

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (94.5s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-829276 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-829276 --driver=kvm2  --container-runtime=crio: (44.951221065s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-831981 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-831981 --driver=kvm2  --container-runtime=crio: (46.625632662s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-829276
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-831981
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-831981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-831981
helpers_test.go:175: Cleaning up "first-829276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-829276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-829276: (1.028115965s)
--- PASS: TestMinikubeProfile (94.50s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-993590 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-993590 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.196737045s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.20s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-993590 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-993590 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.478712428s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.48s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-993590 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-008699
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-008699: (1.725265909s)
--- PASS: TestMountStart/serial/Stop (1.73s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.94s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008699
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008699: (23.936843868s)
--- PASS: TestMountStart/serial/RestartStopped (24.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (134.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061470 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 00:25:48.629214   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061470 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m13.822715491s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-061470 -- rollout status deployment/busybox: (4.985377978s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-hbcvz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-tzfdc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-hbcvz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-tzfdc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-hbcvz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-tzfdc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.63s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-hbcvz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-hbcvz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-tzfdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-061470 -- exec busybox-fc5497c4f-tzfdc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (45.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-061470 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-061470 -v 3 --alsologtostderr: (44.565948069s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.17s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-061470 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp testdata/cp-test.txt multinode-061470:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470:/home/docker/cp-test.txt multinode-061470-m02:/home/docker/cp-test_multinode-061470_multinode-061470-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test_multinode-061470_multinode-061470-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470:/home/docker/cp-test.txt multinode-061470-m03:/home/docker/cp-test_multinode-061470_multinode-061470-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test_multinode-061470_multinode-061470-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp testdata/cp-test.txt multinode-061470-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt multinode-061470:/home/docker/cp-test_multinode-061470-m02_multinode-061470.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test_multinode-061470-m02_multinode-061470.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m02:/home/docker/cp-test.txt multinode-061470-m03:/home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test_multinode-061470-m02_multinode-061470-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp testdata/cp-test.txt multinode-061470-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3750174102/001/cp-test_multinode-061470-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt multinode-061470:/home/docker/cp-test_multinode-061470-m03_multinode-061470.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470 "sudo cat /home/docker/cp-test_multinode-061470-m03_multinode-061470.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 cp multinode-061470-m03:/home/docker/cp-test.txt multinode-061470-m02:/home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 ssh -n multinode-061470-m02 "sudo cat /home/docker/cp-test_multinode-061470-m03_multinode-061470-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.49s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-061470 node stop m03: (2.296724521s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061470 status: exit status 7 (427.36809ms)

                                                
                                                
-- stdout --
	multinode-061470
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-061470-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-061470-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr: exit status 7 (432.486769ms)

                                                
                                                
-- stdout --
	multinode-061470
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-061470-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-061470-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 00:28:27.841581   53903 out.go:291] Setting OutFile to fd 1 ...
	I0429 00:28:27.841831   53903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:28:27.841840   53903 out.go:304] Setting ErrFile to fd 2...
	I0429 00:28:27.841844   53903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 00:28:27.842447   53903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17977-13393/.minikube/bin
	I0429 00:28:27.842740   53903 out.go:298] Setting JSON to false
	I0429 00:28:27.842774   53903 mustload.go:65] Loading cluster: multinode-061470
	I0429 00:28:27.843025   53903 notify.go:220] Checking for updates...
	I0429 00:28:27.843596   53903 config.go:182] Loaded profile config "multinode-061470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 00:28:27.843619   53903 status.go:255] checking status of multinode-061470 ...
	I0429 00:28:27.843987   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:27.844019   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:27.860277   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0429 00:28:27.860704   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:27.861310   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:27.861336   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:27.861750   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:27.861977   53903 main.go:141] libmachine: (multinode-061470) Calling .GetState
	I0429 00:28:27.863534   53903 status.go:330] multinode-061470 host status = "Running" (err=<nil>)
	I0429 00:28:27.863553   53903 host.go:66] Checking if "multinode-061470" exists ...
	I0429 00:28:27.863959   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:27.864003   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:27.878700   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0429 00:28:27.879083   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:27.879494   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:27.879516   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:27.879817   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:27.879955   53903 main.go:141] libmachine: (multinode-061470) Calling .GetIP
	I0429 00:28:27.882463   53903 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:28:27.882893   53903 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:28:27.882923   53903 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:28:27.883039   53903 host.go:66] Checking if "multinode-061470" exists ...
	I0429 00:28:27.883374   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:27.883419   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:27.897474   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0429 00:28:27.897767   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:27.898175   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:27.898199   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:27.898501   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:27.898679   53903 main.go:141] libmachine: (multinode-061470) Calling .DriverName
	I0429 00:28:27.898845   53903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:28:27.898862   53903 main.go:141] libmachine: (multinode-061470) Calling .GetSSHHostname
	I0429 00:28:27.900993   53903 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:28:27.901393   53903 main.go:141] libmachine: (multinode-061470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:3a:ff", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:25:26 +0000 UTC Type:0 Mac:52:54:00:7e:3a:ff Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-061470 Clientid:01:52:54:00:7e:3a:ff}
	I0429 00:28:27.901422   53903 main.go:141] libmachine: (multinode-061470) DBG | domain multinode-061470 has defined IP address 192.168.39.59 and MAC address 52:54:00:7e:3a:ff in network mk-multinode-061470
	I0429 00:28:27.901573   53903 main.go:141] libmachine: (multinode-061470) Calling .GetSSHPort
	I0429 00:28:27.901714   53903 main.go:141] libmachine: (multinode-061470) Calling .GetSSHKeyPath
	I0429 00:28:27.901871   53903 main.go:141] libmachine: (multinode-061470) Calling .GetSSHUsername
	I0429 00:28:27.901996   53903 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470/id_rsa Username:docker}
	I0429 00:28:27.986073   53903 ssh_runner.go:195] Run: systemctl --version
	I0429 00:28:27.992498   53903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:28:28.008206   53903 kubeconfig.go:125] found "multinode-061470" server: "https://192.168.39.59:8443"
	I0429 00:28:28.008240   53903 api_server.go:166] Checking apiserver status ...
	I0429 00:28:28.008278   53903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 00:28:28.026517   53903 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup
	W0429 00:28:28.038975   53903 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 00:28:28.039034   53903 ssh_runner.go:195] Run: ls
	I0429 00:28:28.043639   53903 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0429 00:28:28.048182   53903 api_server.go:279] https://192.168.39.59:8443/healthz returned 200:
	ok
	I0429 00:28:28.048203   53903 status.go:422] multinode-061470 apiserver status = Running (err=<nil>)
	I0429 00:28:28.048216   53903 status.go:257] multinode-061470 status: &{Name:multinode-061470 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:28:28.048243   53903 status.go:255] checking status of multinode-061470-m02 ...
	I0429 00:28:28.048539   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:28.048578   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:28.063890   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41595
	I0429 00:28:28.064249   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:28.064763   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:28.064783   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:28.065107   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:28.065278   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetState
	I0429 00:28:28.066764   53903 status.go:330] multinode-061470-m02 host status = "Running" (err=<nil>)
	I0429 00:28:28.066789   53903 host.go:66] Checking if "multinode-061470-m02" exists ...
	I0429 00:28:28.067084   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:28.067124   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:28.081092   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0429 00:28:28.081474   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:28.081908   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:28.081930   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:28.082205   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:28.082414   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetIP
	I0429 00:28:28.084911   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | domain multinode-061470-m02 has defined MAC address 52:54:00:ea:fa:31 in network mk-multinode-061470
	I0429 00:28:28.085338   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:fa:31", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:26:56 +0000 UTC Type:0 Mac:52:54:00:ea:fa:31 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-061470-m02 Clientid:01:52:54:00:ea:fa:31}
	I0429 00:28:28.085367   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | domain multinode-061470-m02 has defined IP address 192.168.39.153 and MAC address 52:54:00:ea:fa:31 in network mk-multinode-061470
	I0429 00:28:28.085459   53903 host.go:66] Checking if "multinode-061470-m02" exists ...
	I0429 00:28:28.085730   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:28.085783   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:28.099582   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0429 00:28:28.099942   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:28.100360   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:28.100383   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:28.100696   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:28.100846   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .DriverName
	I0429 00:28:28.101020   53903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 00:28:28.101044   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetSSHHostname
	I0429 00:28:28.103265   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | domain multinode-061470-m02 has defined MAC address 52:54:00:ea:fa:31 in network mk-multinode-061470
	I0429 00:28:28.103650   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:fa:31", ip: ""} in network mk-multinode-061470: {Iface:virbr1 ExpiryTime:2024-04-29 01:26:56 +0000 UTC Type:0 Mac:52:54:00:ea:fa:31 Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:multinode-061470-m02 Clientid:01:52:54:00:ea:fa:31}
	I0429 00:28:28.103675   53903 main.go:141] libmachine: (multinode-061470-m02) DBG | domain multinode-061470-m02 has defined IP address 192.168.39.153 and MAC address 52:54:00:ea:fa:31 in network mk-multinode-061470
	I0429 00:28:28.103825   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetSSHPort
	I0429 00:28:28.104007   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetSSHKeyPath
	I0429 00:28:28.104145   53903 main.go:141] libmachine: (multinode-061470-m02) Calling .GetSSHUsername
	I0429 00:28:28.104259   53903 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17977-13393/.minikube/machines/multinode-061470-m02/id_rsa Username:docker}
	I0429 00:28:28.186464   53903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 00:28:28.202954   53903 status.go:257] multinode-061470-m02 status: &{Name:multinode-061470-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 00:28:28.202988   53903 status.go:255] checking status of multinode-061470-m03 ...
	I0429 00:28:28.203274   53903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 00:28:28.203308   53903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 00:28:28.218547   53903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0429 00:28:28.218970   53903 main.go:141] libmachine: () Calling .GetVersion
	I0429 00:28:28.219460   53903 main.go:141] libmachine: Using API Version  1
	I0429 00:28:28.219480   53903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 00:28:28.219793   53903 main.go:141] libmachine: () Calling .GetMachineName
	I0429 00:28:28.220008   53903 main.go:141] libmachine: (multinode-061470-m03) Calling .GetState
	I0429 00:28:28.221449   53903 status.go:330] multinode-061470-m03 host status = "Stopped" (err=<nil>)
	I0429 00:28:28.221460   53903 status.go:343] host is not running, skipping remaining checks
	I0429 00:28:28.221466   53903 status.go:257] multinode-061470-m03 status: &{Name:multinode-061470-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.16s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (30.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-061470 node start m03 -v=7 --alsologtostderr: (29.923871108s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.56s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-061470 node delete m03: (1.867891942s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (183.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061470 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061470 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m2.766349055s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-061470 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (183.33s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (49.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-061470
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061470-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-061470-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.767997ms)

                                                
                                                
-- stdout --
	* [multinode-061470-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-061470-m02' is duplicated with machine name 'multinode-061470-m02' in profile 'multinode-061470'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-061470-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-061470-m03 --driver=kvm2  --container-runtime=crio: (47.92704675s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-061470
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-061470: exit status 80 (230.686224ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-061470 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-061470-m03 already exists in multinode-061470-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-061470-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.05s)

                                                
                                    
x
+
TestScheduledStopUnix (115.57s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-806139 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-806139 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.881061545s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806139 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-806139 -n scheduled-stop-806139
E0429 00:45:48.628549   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806139 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806139 -n scheduled-stop-806139
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-806139
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-806139
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-806139: exit status 7 (84.890297ms)

                                                
                                                
-- stdout --
	scheduled-stop-806139
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806139 -n scheduled-stop-806139
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806139 -n scheduled-stop-806139: exit status 7 (74.345397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-806139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-806139
--- PASS: TestScheduledStopUnix (115.57s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (224.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1924130133 start -p running-upgrade-127682 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1924130133 start -p running-upgrade-127682 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.630929945s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-127682 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-127682 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.884711954s)
helpers_test.go:175: Cleaning up "running-upgrade-127682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-127682
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-127682: (1.156410497s)
--- PASS: TestRunningBinaryUpgrade (224.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.264294ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-069355] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17977
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17977-13393/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17977-13393/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069355 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069355 --driver=kvm2  --container-runtime=crio: (1m34.471836308s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-069355 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (146.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.207831714 start -p stopped-upgrade-634323 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.207831714 start -p stopped-upgrade-634323 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m32.49001378s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.207831714 -p stopped-upgrade-634323 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.207831714 -p stopped-upgrade-634323 stop: (2.125292019s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-634323 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-634323 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.340493466s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (70.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m8.363686603s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-069355 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-069355 status -o json: exit status 2 (259.051242ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-069355","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-069355
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-069355: (1.630789138s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069355 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.169279518s)
--- PASS: TestNoKubernetes/serial/Start (30.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-069355 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-069355 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.753897ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (34.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.138653691s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0429 00:50:31.676156   20727 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17977-13393/.minikube/profiles/functional-243137/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (19.129845833s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-069355
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-069355: (1.513768501s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-069355 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-069355 --driver=kvm2  --container-runtime=crio: (22.440809707s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.44s)

                                                
                                    
x
+
TestPause/serial/Start (78.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-934652 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-934652 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m18.88985929s)
--- PASS: TestPause/serial/Start (78.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-634323
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-069355 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-069355 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.73212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    

Test skip (32/207)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    